331

Use of Partial Cumulative Sum to Detect Trends and Change Periods in Time Series Analysis with Fuzzy Statistics

陳力揚 Unknown Date (has links)
轉折點與趨勢的研究在時間數列分析、經濟與財務領域裡一直是重要的研究主題。隨著所欲研究的物件之結構複雜性日益增加,再加上人類的知識語言因人類本身的主觀意識、不同時間、環境的變遷與研判事件的角度等條件下,可能使得所蒐集到的時間數列資料具某種程度的模糊性。為此,Zadeh[1965]提出了模糊理論,專門解決這一類的問題。在討論時間數列分析中的轉折點與趨勢問題時,常常會遇到時間數列的轉折過程緩慢且不明顯的情況。因此傳統的轉折點研究方法在這種情形下便顯得不足。對此,許多學者提出了轉折區間的概念。然而轉折區間的概念仍然存在一個潛在的困擾:在一個小的時間區間下,一個被認定的轉折區間可能在時間區間拉得很長的情況下,被視為是一個不重要的擾動或雜訊。本文嘗試藉由模糊統計量,提出一個轉折區間與趨勢的偵測方法。與其他轉折區間偵測法不同的是我們所提的方法能藉由控制參數,偵測到合乎使用者需求的轉折區間,進而找到一個趨勢的起點與終點。藉此避免把雜訊當成轉折區間或把轉折區間當成雜訊的困擾。因為使用了模糊統計量,同時也解決了資料的模糊性問題。 / Because the structural change of a time series from one pattern to another may not switch at once but rather experience a period of adjustment time, conventional change-point detection may be inappropriate to apply under this circumstance. Furthermore, changes in time series often occur gradually, so that there is a certain amount of fuzziness in the change point. For this reason, much research has focused on the theory of change-period detection to obtain a better-fitting model. However, a change period in some small observation time interval may appear to be negligible noise in a larger observation time interval. In this paper, we propose an approach to detect trends and change periods with fuzzy statistics using partial cumulative sums. By controlling the parameters, we can filter out the noise and identify suitable change periods. With the change periods, we can further find the trends in a time series. Finally, some simulated data and empirical examples are studied to test our approach. Simulation and empirical results show that the performance of our approach is satisfactory.
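The partial-cumulative-sum idea behind the abstract can be illustrated without the fuzzy-statistics machinery. The sketch below is an assumption-laden simplification, not the thesis's method: the window length and the max-minus-min drift score are illustrative choices. It flags change periods by how far the cumulative sum of mean deviations drifts inside a sliding window:

```python
import numpy as np

def partial_cusum(x, window):
    """Partial cumulative sums of deviations from the window mean;
    the score is the drift range of those sums over each sliding window."""
    scores = np.zeros(len(x))
    for t in range(window, len(x)):
        seg = x[t - window:t]
        s = np.cumsum(seg - seg.mean())
        # Range of the partial sums: large when the segment drifts.
        scores[t] = s.max() - s.min()
    return scores

# A series with a gradual level shift between t=100 and t=120.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100),
                    np.linspace(0, 5, 20) + rng.normal(0, 1, 20),
                    rng.normal(5, 1, 80)])
scores = partial_cusum(x, window=40)
print(int(np.argmax(scores)))  # index near the change period
```

A slow transition then shows up as a broad plateau of high scores rather than a single spike, which is the intuition behind detecting change periods instead of change points.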
332

Hardness of Constraint Satisfaction and Hypergraph Coloring : Constructions of Probabilistically Checkable Proofs with Perfect Completeness

Huang, Sangxia January 2015 (has links)
A Probabilistically Checkable Proof (PCP) of a mathematical statement is a proof written in a special manner that allows for efficient probabilistic verification. The celebrated PCP Theorem states that for every family of statements in NP, there is a probabilistic verification procedure that checks the validity of a PCP proof by reading only 3 bits from it. This landmark theorem, and the works leading up to it, laid the foundation for many subsequent works in computational complexity theory, the most prominent among them being the study of inapproximability of combinatorial optimization problems. This thesis focuses on a broad class of combinatorial optimization problems called Constraint Satisfaction Problems (CSPs). In an instance of a CSP of arity k, we are given a set of variables taking values from some finite domain, and a set of constraints each involving a subset of at most k variables. The goal is to find an assignment that simultaneously satisfies as many constraints as possible. An alternative formulation of the goal that is commonly used is Gap-CSP, where the goal is to decide whether a CSP instance is satisfiable or far from satisfiable, where the exact meaning of being far from satisfiable varies depending on the problem. We first study Boolean CSPs, where the domain of the variables is {0,1}. The main question we study is the hardness of distinguishing satisfiable Boolean CSP instances from those for which no assignment satisfies more than some epsilon fraction of the constraints. Intuitively, as the arity increases, the CSP gets more complex and thus the hardness parameter epsilon should decrease. We show that for Boolean CSPs of arity k, it is NP-hard to distinguish satisfiable instances from those that are at most 2^{~O(k^{1/3})}/2^k-satisfiable. We also study coloring of graphs and hypergraphs. Given a graph or a hypergraph, a coloring is an assignment of colors to vertices, such that all edges or hyperedges are non-monochromatic. 
The gap problem is to distinguish instances that are colorable with a small number of colors, from those that require a large number of colors. For graphs, we prove that there exists a constant K_0 > 0, such that for any K >= K_0, it is NP-hard to distinguish K-colorable graphs from those that require 2^{Omega(K^{1/3})} colors. For hypergraphs, we prove that it is quasi-NP-hard to distinguish 2-colorable 8-uniform hypergraphs of size N from those that require 2^{(log N)^{1/4-o(1)}} colors. In terms of techniques, all these results are based on constructions of PCPs with perfect completeness, that is, PCPs where the probabilistic proof verification procedure always accepts a correct proof. Not only is this a very natural property for proofs, but it can also be an essential requirement in many applications. It has always been particularly challenging to construct PCPs with perfect completeness for NP statements due to limitations in techniques. Our improved hardness results build on and extend many of the current approaches. Our Boolean CSP result and graph coloring result were proved by adapting the Direct Sum of PCPs idea by Siu On Chan to the perfect completeness setting. Our proof for hypergraph coloring hardness improves and simplifies the recent work by Khot and Saket, in which they proposed the notion of superposition complexity of CSPs. / Ett probabilistiskt verifierbart bevis (eng: Probabilistically Checkable Proof, PCP) av en matematisk sats är ett bevis skrivet på ett speciellt sätt vilket möjliggör en effektiv probabilistisk verifiering. Den berömda PCP-satsen säger att för varje familj av påståenden i NP finns det en probabilistisk verifierare som kontrollerar om ett PCP-bevis är giltigt genom att läsa endast 3 bitar från det. Denna banbrytande sats, och arbetena som ledde fram till det, lade grunden för många senare arbeten inom komplexitetsteorin, framförallt inom studiet av approximerbarhet av kombinatoriska optimeringsproblem. 
I denna avhandling fokuserar vi på en bred klass av optimeringsproblem i form av villkorsuppfyllningsproblem (engelska ``Constraint Satisfaction Problems'' CSPs). En instans av ett CSP av aritet k ges av en mängd variabler som tar värden från någon ändlig domän, och ett antal villkor som vart och ett beror på en delmängd av högst k variabler. Målet är att hitta en tilldelning av variablerna som samtidigt uppfyller så många som möjligt av villkoren. En alternativ formulering av målet som ofta används är Gap-CSP, där målet är att avgöra om en CSP-instans är satisfierbar eller långt ifrån satisfierbar, där den exakta innebörden av att vara ``långt ifrån satisfierbar'' varierar beroende på problemet. Först studerar vi booleska CSPer, där domänen är {0,1}. Den fråga vi studerar är svårigheten av att särskilja satisfierbara booleska CSP-instanser från instanser där den bästa tilldelningen satisfierar högst en andel epsilon av villkoren. Intuitivt, när ariteten ökar blir CSP mer komplexa och därmed bör svårighetsparametern epsilon avta med ökande aritet. Detta visar sig vara sant och ett första resultat är att för booleska CSP av aritet k är det NP-svårt att särskilja satisfierbara instanser från dem som är högst 2^{~O(k^{1/3})}/2^k-satisfierbara. Vidare studerar vi färgläggning av grafer och hypergrafer. Givet en graf eller en hypergraf, är en färgläggning en tilldelning av färger till noderna, så att ingen kant eller hyperkant är monokromatisk. Problemet vi analyserar är att särskilja instanser som är färgbara med ett litet antal färger från dem som behöver många färger. För grafer visar vi att det finns en konstant K_0 > 0, så att för alla K >= K_0 är det NP-svårt att särskilja grafer som är K-färgbara från dem som kräver minst 2^{Omega(K^{1/3})} färger. För hypergrafer visar vi att det är kvasi-NP-svårt att särskilja 2-färgbara 8-likformiga hypergrafer som har N noder från dem som kräver minst 2^{(log N)^{1/4-o(1)}} färger. 
Samtliga dessa resultat bygger på konstruktioner av PCPer med perfekt fullständighet. Det vill säga PCPer där verifieraren alltid accepterar ett korrekt bevis. Inte bara är detta en mycket naturlig egenskap för PCPer, utan det kan också vara ett nödvändigt krav för vissa tillämpningar. Konstruktionen av PCPer med perfekt fullständighet för NP-påståenden ger tekniska komplikationer och kräver delvis utvecklande av nya metoder. Vårt booleska CSP-resultat och vårt färgläggningsresultat bevisas genom att anpassa ``Direktsumman-metoden'' introducerad av Siu On Chan till fallet med perfekt fullständighet. Vårt bevis för hypergraffärgningssvårighet förbättrar och förenklar ett färskt resultat av Khot och Saket, där de föreslog begreppet superpositionskomplexitet av CSP.
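The (hyper)graph coloring notion used above is concrete enough to check mechanically: a coloring is proper exactly when no edge or hyperedge is monochromatic. A minimal checker on a toy 3-uniform hypergraph (the instance is invented for illustration, not taken from the thesis):

```python
def is_proper_coloring(hyperedges, coloring):
    """A coloring is proper if every hyperedge sees at least two colors."""
    return all(len({coloring[v] for v in e}) >= 2 for e in hyperedges)

# Toy 3-uniform hypergraph on vertices 0..3 (illustrative only).
edges = [(0, 1, 2), (1, 2, 3), (0, 2, 3)]
print(is_proper_coloring(edges, {0: 'r', 1: 'r', 2: 'b', 3: 'r'}))  # True
print(is_proper_coloring(edges, {0: 'r', 1: 'r', 2: 'r', 3: 'b'}))  # False
```

The hardness results above concern exactly this predicate: deciding whether such a proper coloring with few colors exists at all.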
333

A CAD tool for current-mode multiple-valued CMOS circuits

Lee, Hoon S. 12 1900 (has links)
Approved for public release; distribution is unlimited / The contribution of this thesis is the development of a CAD (computer-aided design) tool for current-mode multiple-valued logic (MVL) CMOS circuits. It is only the second known MVL CAD tool and the first CAD tool for MVL CMOS. The tool accepts a user-supplied specification of the function to be realized, produces a minimal or near-minimal realization (if such a realization is possible), and produces a layout of a programmable logic array (PLA) integrated circuit that realizes the given function. The layout is in MAGIC format, suitable for submission to a chip manufacturer. The CAD tool also allows the user to simulate the realized function in order to verify the correctness of the design. The CAD tool is also designed to be an analysis tool for heuristic minimization algorithms. As part of this thesis, a random function generator and a statistics-gathering package were developed. In the present tool, two heuristics are provided and the user can choose one or both. In the latter case, the better realization is output to the user. The CAD tool is designed to be flexible, so that future improvements can be made in the heuristic algorithms as well as in the layout generator. Thus, the tool can be used to accommodate new technologies, for example, a voltage-mode CMOS PLA rather than the current-mode CMOS currently implemented. / http://archive.org/details/cadtoolforcurren00leeh / Lieutenant, Republic of Korea Navy
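For readers unfamiliar with multiple-valued logic, the sketch below shows a few operators commonly used in MVL synthesis; the truncated sum in particular is natural in current-mode circuits, where output currents add. The specific function is a hypothetical example, not one produced by the tool:

```python
# 4-valued logic operators commonly used in MVL synthesis (illustrative).
R = 3  # logic values range over 0..R

def mvl_min(a, b):
    """MIN, the MVL analogue of AND."""
    return min(a, b)

def mvl_tsum(a, b):
    """Truncated sum: currents add, then saturate at the top logic level."""
    return min(a + b, R)

def literal(x, lo, hi):
    """Window literal: outputs R when lo <= x <= hi, else 0."""
    return R if lo <= x <= hi else 0

# A one-variable example function f(x) = tsum(literal(x, 1, 2), min(x, 1)).
f = [mvl_tsum(literal(x, 1, 2), mvl_min(x, 1)) for x in range(R + 1)]
print(f)  # truth vector of f over x = 0..3
```

A PLA-style realization of such a function amounts to choosing a small set of literal/MIN product terms whose truncated sum reproduces this truth vector, which is what the minimization heuristics search over.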
334

Strategies for Deriving a Single Measure of the Overall Burden of Antimicrobial Resistance in Hospitals

Orlando, Alessandro 11 May 2010 (has links)
Background: Antimicrobial-resistant infections result in hospital stays costing between $18,000 and $29,000. As of 2009, Centers for Medicare and Medicaid Services no longer upgrade payments for hospital-acquired infections. Hospital epidemiologists monitor and document rates of individual resistant microbes in antibiogram reports. Overall summary measures capturing resistance within a hospital may be useful. Objectives: We applied four techniques (L1- and L2-principal component analysis (PCA), desirability functions, and simple summary) to create summary measures of resistance and described the four summary measures with respect to reliability, proportion of variance explained, and clinical utility. Methods: We requested antibiograms from hospitals participating in the University HealthSystem Consortium for the years 2002–2008 (n=40). A clinical team selected organism-drug resistant pairs (as resistant isolates per 1,000 patient days) based on 1) virulence, 2) complicated or toxic therapies, 3) transmissibility, and 4) high incidence with increasing levels of resistance. Four methods were used to create summary scores: 1) L1- and L2-PCA: derived multipliers so that the variance explained is maximized; 2) desirability function: transformed resistance data to be between 0 and 1; 3) simple sum: each resistance rate was added and divided by the square root of the total number of microbes summed. Simple correlation analyses between time and each summary score evaluated reliability. For each year, we calculated the proportion of explained variance by dividing each summary score’s variance by the variance in the original data. Clinical utility was checked by comparing the trends for all of the individual microbe’s resistance rates to the trends seen in the summary scores for each hospital. Results: Proportion of variance explained by L1- and L2-PCA and the simple sum was 0.61, 0.62, and 0.29 respectively. 
Simple sum and L1- and L2-PCA summary scores best followed the trends seen in the individual antimicrobial resistance rates; trends in desirability function scores deviated from those seen in individual trends of antimicrobial resistance. L1- and L2-PCA summary scores were more influenced by MRSA rates, and the simple sum score was less influenced. Pearson correlation coefficients revealed good reliability through time. Conclusion: Deriving summary measures of antimicrobial resistance can be reliable over time and explain a high proportion of variance. Infection control practitioners and hospital epidemiologists may find the inclusion of a summary score of antimicrobial resistance beneficial in describing the trends of overall resistance in their yearly antibiogram reports.
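The simple-sum and L2-PCA summary scores described in the Methods section can be sketched as follows. The rates matrix is hypothetical and this is a plain reconstruction of the two formulas (sum scaled by the square root of the number of microbes; projection onto the first principal component), not the authors' code:

```python
import numpy as np

def simple_sum_score(rates):
    """Sum of resistance rates per year, divided by sqrt(number of microbes)."""
    rates = np.asarray(rates, dtype=float)
    return rates.sum(axis=1) / np.sqrt(rates.shape[1])

def pca_score(rates):
    """Projection onto the first principal component (L2-PCA via SVD)."""
    X = np.asarray(rates, dtype=float)
    Xc = X - X.mean(axis=0)            # center each organism-drug column
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[0]                  # scores along the top component

# Hypothetical rates (resistant isolates per 1,000 patient days);
# rows = years, columns = organism-drug pairs.
rates = [[1.2, 0.8, 2.5],
         [1.5, 0.9, 2.9],
         [1.9, 1.1, 3.4]]
print(np.round(simple_sum_score(rates), 3))
```

With uniformly rising rates, both scores trend monotonically, which is the "clinical utility" property the study checks against individual microbe trends.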
335

Optimisation multiobjectif de réseaux de transport de gaz naturel / Multiobjective optimization of natural gas transportation networks

Hernandez-Rodriguez, Guillermo 19 September 2011 (has links)
L'optimisation de l'exploitation d'un réseau de transport de gaz naturel (RTGN) est typiquement un problème d'optimisation multiobjectif, faisant intervenir notamment la minimisation de la consommation énergétique dans les stations de compression, la maximisation du rendement, etc. Cependant, très peu de travaux concernant l'optimisation multiobjectif des réseaux de gazoducs sont présentés dans la littérature. Ainsi, ce travail vise à fournir un cadre général de formulation et de résolution de problèmes d'optimisation multiobjectif liés aux RTGN. Dans la première partie de l'étude, le modèle du RTGN est présenté. Ensuite, diverses techniques d'optimisation multiobjectif appartenant aux deux grandes classes de méthodes par scalarisation, d'une part, et de procédures évolutionnaires, d'autre part, communément utilisées dans de nombreux domaines de l'ingénierie, sont détaillées. Sur la base d'une étude comparative menée sur deux exemples mathématiques et cinq problèmes de génie des procédés (incluant en particulier un RTGN), un algorithme génétique basé sur une variante de NSGA-II, qui surpasse les méthodes de scalarisation, de somme pondérée et d'ε-Contrainte, a été retenu pour résoudre un problème d'optimisation tricritère d'un RTGN. Tout d'abord un problème monocritère relatif à la minimisation de la consommation de fuel dans les stations de compression est résolu. Ensuite un problème bicritère, où la consommation de fuel doit être minimisée et la livraison de gaz aux points terminaux du réseau maximisée, est présenté ; l'ensemble des solutions non dominées est représenté sur un front de Pareto. Enfin l'impact d'injection d'hydrogène dans le RTGN est analysé en introduisant un troisième critère : le pourcentage d'hydrogène injecté dans le réseau que l'on doit maximiser. 
Dans les deux cas multiobjectifs, des méthodes génériques d'aide à la décision multicritère sont mises en oeuvre pour déterminer les meilleures solutions parmi toutes celles déployées sur les fronts de Pareto. / The optimization of a natural gas transportation network (NGTN) is typically a multiobjective optimization problem, involving for instance energy consumption minimization at the compressor stations and gas delivery maximization. However, very few works concerning multiobjective optimization of gas pipeline networks are reported in the literature. Thereby, this work aims at providing a general framework for the formulation and resolution of multiobjective optimization problems related to NGTNs. In the first part of the study, the NGTN model is described. Then, various multiobjective optimization techniques belonging to two main classes, scalarization and evolutionary, commonly used for engineering purposes, are presented. In a comparative study performed on two mathematical examples and on five process engineering problems (including a NGTN), a variant of the multiobjective genetic algorithm NSGA-II outperforms the classical scalarization methods, weighted-sum and ε-constraint. NSGA-II was therefore selected for performing the triobjective optimization of a NGTN. First, the monobjective problem related to the minimization of the fuel consumption in the compression stations is solved. Then a biobjective problem, where the fuel consumption has to be minimized and the gas mass flow delivery at end-points of the network maximized, is presented. The non-dominated solutions are displayed in the form of a Pareto front. Finally, the study of the impact of hydrogen injection in the NGTN is carried out by introducing a third criterion, i.e., the percentage of injected hydrogen to be maximized. In the two multiobjective cases, generic multicriteria decision-making tools are implemented to identify the best solution among the ones displayed on the Pareto fronts.
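A core building block of any such multiobjective study is extracting the non-dominated (Pareto) set; converting "maximize delivery" into "minimize negated delivery" lets a single dominance test cover both criteria. Below is a naive sketch with invented trade-off values; NSGA-II itself adds non-dominated sorting and crowding distance on top of this basic test:

```python
def pareto_front(points):
    """Return the non-dominated points, assuming both objectives are minimized."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both
        # objectives (and differs from p).
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

# Hypothetical (fuel consumption, -gas delivery) trade-offs.
sols = [(10.0, -95.0), (12.0, -99.0), (11.0, -94.0), (15.0, -99.5)]
print(pareto_front(sols))
```

Here (11.0, -94.0) is filtered out because (10.0, -95.0) uses less fuel while delivering more gas; the remaining points form the trade-off curve a decision-maker chooses from.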
336

A Hierarchical Bayesian Model for the Unmixing Analysis of Compositional Data subject to Unit-sum Constraints

Yu, Shiyong 15 May 2015 (has links)
Modeling of compositional data is emerging as an active area in statistics. It is assumed that compositional data represent the convex linear mixing of a definite number of independent sources usually referred to as end members. A generic problem in practice is to appropriately separate the end members and quantify their fractions from compositional data subject to nonnegativity and unit-sum constraints. A number of methods essentially related to polytope expansion have been proposed. However, these deterministic methods have some potential problems. In this study, a hierarchical Bayesian model was formulated, and the algorithms were coded in MATLAB®. Test runs using both synthetic and real-world datasets yield scientifically sound and mathematically optimal outputs broadly consistent with other non-Bayesian methods. The sensitivity of this model to the choice of different priors and to the structure of the error covariance matrix is also discussed.
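The convex-linear-mixing assumption with unit-sum constraints can be stated in a few lines: if each end member's composition sums to 1 and the mixing fractions are nonnegative and sum to 1, the mixture automatically satisfies the unit-sum constraint. A sketch with made-up end members (the unmixing problem is the inverse: recover the fractions and end members from observed samples):

```python
import numpy as np

# End members: rows are sources, columns are components; each row sums to 1.
endmembers = np.array([[0.7, 0.2, 0.1],
                       [0.1, 0.3, 0.6]])

def mix(fractions, endmembers):
    """Convex linear mixing: fractions are nonnegative and sum to 1,
    so the mixture inherits the unit-sum constraint of the end members."""
    fractions = np.asarray(fractions, dtype=float)
    assert np.all(fractions >= 0) and np.isclose(fractions.sum(), 1.0)
    return fractions @ endmembers

sample = mix([0.25, 0.75], endmembers)
print(np.round(sample, 3), round(float(sample.sum()), 6))
```

Geometrically, every valid sample lies inside the simplex spanned by the end-member rows, which is why polytope-expansion methods (and their Bayesian alternatives) frame unmixing as finding that enclosing simplex.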
337

Smluvní pokuta - frekventovaný prostředek zajištění závazkových vztahů / Contractual penalty – the frequent type of security

Šedová, Klára January 2010 (has links)
Contractual penalty is an effective and, in practice, frequently used type of security. However, the Czech legal regulation of the contractual penalty cannot be considered ideal, and many difficulties have been connected with the application of this instrument. The thesis aims at clarifying the functions of contractual penalty, the conditions for its valid and effective creation, and the consequences of an excessive sum of contractual penalty. Furthermore, the thesis focuses on the relation between contractual penalty and other legal instruments, and finally on a comparison with other types of security. The main legal sources of the thesis are court decisions, especially judgments of the Supreme Court of the Czech Republic. The thesis employs methods of historical and comparative interpretation.
338

Hybrid metaheuristic algorithms for sum coloring and bandwidth coloring / Métaheuristiques hybrides pour la somme coloration et la coloration de bande passante

Jin, Yan 29 May 2015 (has links)
Le problème de somme coloration minimum (MSCP) et le problème de coloration de bande passante (BCP) sont deux généralisations importantes du problème de coloration des sommets classique avec de nombreuses applications dans divers domaines, y compris la conception de circuits imprimés, la planification, l'allocation de ressources, l'affectation de fréquence dans les réseaux mobiles, etc. Les problèmes MSCP et BCP étant NP-difficiles, les heuristiques et métaheuristiques sont souvent utilisées en pratique pour obtenir des solutions de bonne qualité en un temps de calcul acceptable. Cette thèse est consacrée à des métaheuristiques hybrides pour la résolution efficace des problèmes MSCP et BCP. Pour le problème MSCP, nous présentons deux algorithmes mémétiques qui combinent l'évolution d'une population d'individus avec de la recherche locale. Pour le problème BCP, nous proposons un algorithme hybride à base d'apprentissage faisant coopérer une méthode de construction “informée” avec une procédure de recherche locale. Les algorithmes développés sont évalués sur des instances bien connues et se révèlent très compétitifs par rapport à l'état de l'art. Les principaux composants des algorithmes que nous proposons sont également analysés. / The minimum sum coloring problem (MSCP) and the bandwidth coloring problem (BCP) are two important generalizations of the classical vertex coloring problem with numerous applications in diverse domains, including VLSI design, scheduling, resource allocation and frequency assignment in mobile networks, etc. Since the MSCP and BCP are NP-hard problems, heuristics and metaheuristics are practical solution methods to obtain high quality solutions in an acceptable computing time. This thesis is dedicated to developing effective hybrid metaheuristic algorithms for the MSCP and BCP. For the MSCP, we present two memetic algorithms which combine population-based evolutionary search and local search. 
An effective algorithm for maximum independent set is devised for generating initial solutions. For the BCP, we propose a learning-based hybrid search algorithm which follows a cooperative framework between an informed construction procedure and a local search heuristic. The proposed algorithms are evaluated on well-known benchmark instances and show highly competitive performances compared to the current state-of-the-art algorithms from the literature. Furthermore, the key issues of these algorithms are investigated and analyzed.
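The MSCP objective differs from ordinary coloring: colors are positive integers and the quantity minimized is their sum over all vertices, not the number of distinct colors. A greedy baseline on a tiny invented instance shows how the objective is computed; the thesis's memetic algorithms improve far beyond this kind of construction:

```python
def greedy_sum_coloring(adj):
    """Greedy coloring with colors 1, 2, ...; the MSCP objective is the
    sum of assigned colors over all vertices (to be minimized)."""
    colors = {}
    # Color high-degree vertices first, a common ordering heuristic.
    for v in sorted(adj, key=lambda v: -len(adj[v])):
        used = {colors[u] for u in adj[v] if u in colors}
        c = 1
        while c in used:
            c += 1  # smallest color not used by any colored neighbor
        colors[v] = c
    return colors, sum(colors.values())

# A path on 4 vertices: 0-1-2-3 (illustrative instance).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
colors, total = greedy_sum_coloring(adj)
print(total)
```

Note that a proper coloring minimizing the number of colors is not automatically optimal for the sum: MSCP may prefer assigning the cheap color 1 to many vertices even at the cost of using more colors overall.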
339

Algoritmos não-paramétricos para detecção de pontos de mudança em séries temporais de alta frequência / Non-parametric change-point detection algorithms for high-frequency time series

Cardoso, Vitor Mendes 05 July 2018 (has links)
A área de estudos econométricos visando prever o comportamento dos mercados financeiros aparece cada vez mais como uma área de pesquisas dinâmica e abrangente. Dentro deste universo, podemos de maneira geral separar os modelos desenvolvidos em paramétricos e não paramétricos. O presente trabalho tem como objetivo investigar técnicas não-paramétricas derivadas do CUSUM, ferramenta gráfica que se utiliza do conceito de soma acumulada originalmente desenvolvida para controles de produção e de qualidade. As técnicas são utilizadas na modelagem de uma série cambial (USD/EUR) de alta frequência com diversos pontos de negociação dentro de um mesmo dia. / The field of econometric studies aiming to predict the behavior of financial markets increasingly proves itself to be a dynamic and comprehensive research area. Within this universe, we can broadly separate the models developed into parametric and non-parametric. The present work aims to investigate non-parametric techniques derived from CUSUM, a graphical tool based on the cumulative sum concept originally developed for production and quality control. The techniques are applied to the modeling of a high-frequency exchange-rate series (USD/EUR) with several trading points within the same day.
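The classical CUSUM chart the abstract builds on accumulates deviations from a target, resets at zero, and signals when either one-sided statistic crosses a decision threshold h; k is the allowance (slack) that absorbs in-control variation. A noise-free sketch with these conventional parameter names (the thesis's non-parametric variants differ from this textbook form):

```python
def cusum(x, target, k, h):
    """One-sided upper/lower CUSUM statistics; returns the first index
    where either side exceeds the decision threshold h, or None."""
    hi = lo = 0.0
    for i, v in enumerate(x):
        hi = max(0.0, hi + (v - target - k))  # accumulates upward shifts
        lo = max(0.0, lo + (target - v - k))  # accumulates downward shifts
        if hi > h or lo > h:
            return i
    return None

# Level shift from 0 to 1.5 at index 50 (synthetic, noise-free for clarity).
x = [0.0] * 50 + [1.5] * 50
print(cusum(x, target=0.0, k=0.5, h=4.0))  # → 54, a few samples after the shift
```

The detection delay (four samples here) is the usual CUSUM trade-off: larger h means fewer false alarms but slower detection, which matters for high-frequency intraday data.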
340

Metodologia para o diagnóstico em tempo real de para-raios em sistemas de distribuição e transmissão de energia elétrica / Methodology for Real Time Diagnostic of Surge Arresters in Electric Energy Distribution and Transmission Systems

Alves, Marcos Eduardo Guerra 08 November 2013 (has links)
Dada a importância dos para-raios para a proteção dos diversos equipamentos e instalações nos sistemas de distribuição e transmissão de energia elétrica contra danos provocados por sobretensões transitórias, quer sejam originadas por descargas atmosféricas, quer por estabelecimento e interrupção de cargas reativas (sobretensões de manobra), é apresentada uma nova metodologia de diagnóstico de seu estado. São apresentadas também as formas construtivas dos para-raios com tecnologias de carboneto de silício (SiC) e óxido de zinco (ZnO), as quais são associadas a modelos elétricos de representação completos e simplificados, de forma a facilitar a análise dos métodos de diagnóstico. Os métodos atualmente empregados para o diagnóstico dos para-raios, tanto fora de serviço quanto durante a operação, bem como suas potencialidades e pontos falhos são explanados, para os equipamentos de SiC e de ZnO. O novo método de diagnóstico proposto nesse trabalho é introduzido a seguir, baseado na monitoração da capacitância equivalente e resistência equivalente do para-raio, juntamente com simulações dos diversos tipos de defeitos passíveis de ocorrência em equipamentos de SiC e ZnO, verificando-se os parâmetros de medição afetados por cada um deles, de forma a estabelecer a efetividade do novo método de monitoração para a detecção dos defeitos. Também é apresentado um método para viabilização da monitoração em tempo real de capacitância e de resistência equivalentes, através da técnica de soma vetorial das correntes de fuga, atualmente já empregada para monitoração de buchas capacitivas de transformadores e outros equipamentos. Por fim, são apresentados os resultados esperados com o novo método de monitoração e as sugestões de novas etapas para trabalhos futuros. 
/ Given the importance of surge arresters for the protection of several devices and installations in electric energy distribution and transmission systems from damage caused by transitory overvoltages, whether originated from atmospheric discharges or from the closing and opening of reactive loads (switching overvoltages), a new methodology for diagnosing their condition is presented. The constructive forms of surge arresters with silicon carbide (SiC) and zinc oxide (ZnO) technologies are presented, as well as their associated electric representation models, complete and simplified, so as to facilitate the analysis of diagnostic methods. Current surge arrester diagnostic methods are presented, both off-line and on-line, together with their strengths and weak points, for SiC and ZnO arresters. The new diagnostic method proposed in this work is introduced next, based on monitoring of the arrester equivalent capacitance and equivalent resistance, followed by simulations of the several possible defect types in SiC and ZnO devices. The measured parameters affected by each defect type are checked in order to establish the effectiveness of the new monitoring method for the detection of arrester problems. A method for enabling real-time monitoring of equivalent capacitance and resistance is also presented, based on the vector sum of leakage currents technique, already used for monitoring capacitive bushings in power transformers and other equipment. Finally, the expected results of the new monitoring method and suggestions for future work are presented.
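The equivalent-capacitance/resistance idea can be sketched from basic circuit theory: taking the applied voltage as phase reference, the in-phase component of the leakage current phasor gives the equivalent parallel resistance, and the quadrature component gives the capacitance. The numbers below are hypothetical, not measurements from the thesis:

```python
import math

def equivalent_rc(v_rms, freq_hz, i_phasor):
    """Split a leakage-current phasor into resistive and capacitive parts
    relative to the applied voltage (taken as the phase reference), and
    return the equivalent parallel resistance and capacitance."""
    omega = 2 * math.pi * freq_hz
    i_res = i_phasor.real          # component in phase with the voltage
    i_cap = i_phasor.imag          # quadrature (capacitive) component
    r_eq = v_rms / i_res           # parallel resistance from Ohm's law
    c_eq = i_cap / (omega * v_rms) # from |I_cap| = omega * C * V
    return r_eq, c_eq

# Hypothetical arrester: 10 kV applied, 0.2 mA resistive + 1.0 mA capacitive.
r, c = equivalent_rc(10e3, 60.0, complex(0.2e-3, 1.0e-3))
print(round(r / 1e6, 2), round(c * 1e12, 1))  # → 50.0 265.3 (MΩ and pF)
```

A growing resistive component with a roughly constant capacitive one is the kind of signature such monitoring is meant to flag, since ZnO degradation mainly increases the in-phase leakage current.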
