151

An Assessment And Analysis Tool For Statistical Process Control Of Software Processes

Kirbas, Serkan 01 February 2007 (has links) (PDF)
Statistical process control (SPC), which encompasses powerful techniques used for process control in other mature engineering disciplines, is not used by many software organizations. In the software engineering domain, SPC is currently utilized only by organizations that have high maturity levels according to process improvement models such as CMM, ISO/IEC 15504, and CMMI. Guidelines and software tools to implement SPC techniques should be developed for effective use and dissemination of SPC, especially in low-maturity organizations. In this thesis, a software tool (SPC-AAT) that we developed to assess the suitability of software processes and metrics for SPC and to support the use of SPC tools is presented. With SPC-AAT, we aim to ease and enhance the application of SPC, especially for emergent and low-maturity organizations. Control charts, histograms, bar charts, and Pareto charts are the SPC tools supported for this purpose. We also explain the validation of the tool on two processes of a software organization in three case studies.
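As a hedged illustration of the simplest of the SPC tools mentioned above, the sketch below computes individuals-chart (XmR) control limits for a software process metric. The defect-density values and the chart convention are illustrative assumptions, not output of SPC-AAT.

```python
# Minimal sketch of an individuals (X) control chart, one of the SPC tools
# mentioned in the abstract; the metric values below are hypothetical.
import statistics

def individuals_chart_limits(values):
    """Return center line and 3-sigma control limits estimated from
    the average moving range (the usual XmR-chart convention)."""
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = statistics.mean(moving_ranges)
    center = statistics.mean(values)
    sigma_hat = mr_bar / 1.128          # d2 constant for subgroups of size 2
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

# Example: weekly defect-density measurements from a (hypothetical) review process
defect_density = [0.42, 0.38, 0.45, 0.40, 0.51, 0.36, 0.44, 0.47, 0.39, 0.43]
center, lcl, ucl = individuals_chart_limits(defect_density)
signals = [x for x in defect_density if not lcl <= x <= ucl]
print(f"CL={center:.3f}  LCL={lcl:.3f}  UCL={ucl:.3f}  signals={signals}")
```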
152

Video Distribution Over IP Networks

Ozdem, Mehmet 01 February 2007 (has links) (PDF)
As applications like IPTV and VoD (video on demand) gain popularity, it is becoming more important to study the behavior of video signals in Internet access infrastructures such as ADSL and cable networks. Average delay, average jitter, and packet loss in these networks affect the quality of service; hence, transmission and access speeds need to be chosen so that these parameters are minimized. In this study, the behavior of these IP networks under variable bit rate (VBR) video traffic is investigated. The ns-2 simulator is used for this purpose, and both actual and artificially generated signals are applied to the networks under test. VBR traffic is generated synthetically using ON/OFF sources whose ON/OFF times are drawn from exponential or Pareto distributions. As VBR video shows long-range dependence with a Hurst parameter between 0.5 and 1, this parameter was used as a metric to measure the accuracy of the synthetic sources. Two different topologies were simulated in this study: one similar to ADSL access networks and the other behaving like a cable distribution network. The performance of the networks (delay, jitter, and packet loss) under VBR video traffic and different access speeds was measured. Based on the obtained results, minimum access speeds required to achieve acceptable-quality video delivery to customers were suggested.
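The sketch below shows the general idea of such a synthetic ON/OFF source with heavy-tailed (Pareto) sojourn times. The rate, shape, and scale parameters are illustrative assumptions, not values taken from the thesis or from an ns-2 configuration.

```python
# Sketch of a synthetic ON/OFF VBR source with Pareto-distributed sojourn
# times, in the spirit of the traffic generators described above.
import numpy as np

rng = np.random.default_rng(42)

def pareto_times(shape, scale, n):
    """Classical Pareto(shape, scale) sojourn times; shape < 2 gives heavy tails."""
    return scale * (1.0 + rng.pareto(shape, n))

def on_off_trace(duration_s, rate_kbps=400.0, shape=1.4, scale=0.5):
    """Return approximate per-second bit counts for one ON/OFF source."""
    bits = np.zeros(int(duration_s))
    t, on = 0.0, True
    while t < duration_s:
        sojourn = pareto_times(shape, scale, 1)[0]
        if on:
            start, stop = int(t), int(min(t + sojourn, duration_s))
            bits[start:stop] += rate_kbps * 1000.0   # constant rate while ON
        t += sojourn
        on = not on
    return bits

# Aggregating many such sources yields long-range-dependent traffic whose
# Hurst parameter (between 0.5 and 1) can be checked with, e.g., R/S analysis.
trace = sum(on_off_trace(600) for _ in range(20))
print(trace[:10])
```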
153

Analysis of Carbon Policies for Electricity Networks with High Penetration of Green Generation

Feijoo, Felipe 01 January 2015 (has links)
In recent decades, climate change has become one of the most crucial challenges for humanity. Climate change has a direct correlation with global warming, caused mainly by greenhouse gas (GHG) emissions. The U.S. Environmental Protection Agency (EPA) attributes approximately 82% of GHG emissions to carbon dioxide. Unfortunately, the energy sector is the main producer of carbon dioxide, with China and the U.S. as the highest emitters. Therefore, there is a strong (positive) correlation between energy production, global warming, and climate change. Stringent carbon emissions reduction targets have been established in order to reduce the impacts of GHG. Achieving these emissions reduction goals will require implementation of policies such as cap-and-trade and carbon taxes, together with transformation of the electricity grid into a smarter system with high green energy penetration. However, considering policies solely in view of carbon emissions reduction may adversely impact other market outcomes such as electricity prices and consumption. In this dissertation, a two-layer mathematical-statistical framework is presented that serves to develop carbon policies that reduce emissions while minimizing the negative impacts on other market outcomes. The bottom layer of the two-layer model comprises a bi-level optimization problem. The top layer comprises a statistical model and a Pareto analysis. Two related but different problems are studied under this methodology. The first problem looks into the design of cap-and-trade policies for deregulated electricity markets that satisfy the interests of different market constituents. The second problem demonstrates how the framework can be used to obtain levels of carbon emissions reduction while minimizing the negative impact on electricity demand and maximizing green penetration from microgrids. In both studies, forecasts of electricity prices and production costs are considered. Thus, this dissertation also presents a new forecasting model that can be easily integrated into the two-layer framework. It is demonstrated in this dissertation that the proposed framework can be utilized by policy-makers, power companies, consumers, and market regulators in developing emissions policy decisions, bidding strategies, market regulations, and electricity dispatch strategies.
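As a rough illustration of the Pareto analysis performed in the framework's top layer, the sketch below filters a set of candidate policy designs down to the non-dominated ones. The policy labels and outcome values are hypothetical and are not taken from the dissertation.

```python
# Illustrative Pareto analysis: given simulated market outcomes for candidate
# carbon policies, keep only the non-dominated designs.  Outcomes are hypothetical.
# Each candidate: (policy label, emissions in Mt CO2, avg. electricity price $/MWh)
candidates = [
    ("cap-30%", 120.0, 58.0),
    ("cap-40%", 100.0, 63.0),
    ("cap-40%+microgrids", 98.0, 61.0),
    ("carbon-tax-25", 110.0, 66.0),
]

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly better
    in at least one (both objectives are minimized here)."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

pareto_front = [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]
print(pareto_front)   # dominated designs (e.g. "carbon-tax-25") are filtered out
```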
154

Strategic political resource allocation

Mastronardi, Nick 28 April 2015 (has links)
Economics is the study of the allocation of resources. Since Arrow's fundamental welfare theorems, we know that competitive markets achieve Pareto allocations when governments correct market failures. Thus, it has largely been the mission of economists to serve as 'market engineers': to identify and quantify market failures so that the government can implement Pareto-improving policy (making everyone better off without making anyone worse off). Do Pareto-improving policies get implemented? How does policy become implemented? Achieving a Pareto-efficient allocation of a nation's resources requires studying the implementation of policy, and therefore studying the allocation of political resources that influence policy. Policy implementation begins with the electoral process. In this dissertation, I use auction analysis, econometrics, and game theory to study political resource allocations in the electoral process. This dissertation consists of three research papers: Finance-Augmented Median-Voter Model, Vote Empirics, and Colonel Blotto Strategies. The Finance-Augmented Median-Voter Model postulates that candidates' campaign expenditures are bids in a first-price asymmetric all-pay auction in order to explain campaign expenditure behavior. Vote Empirics empirically analyzes the impacts of campaign expenditures, incumbency status, and district voter registration statistics on observed vote-share results from the 2004 congressional election. Colonel Blotto Strategies postulates that parties' campaign allocations across congressional districts may be a version of the classic Colonel Blotto game from game theory. While some equilibrium strategies and equilibrium payoffs have previously been identified, this paper completely characterizes players' optimal strategies. In total, this dissertation solves for candidates' optimal campaign expenditure strategies when campaign expenditures are bids in an all-pay auction. The analysis demonstrates the need to understand exactly the impacts of various factors, including strategic expenditures, on final vote results, and the research uses econometric techniques to identify these effects. Last, the research derives the complete characterization of Colonel Blotto strategies. The extensions discussed provide testable predictions for cross-district party contributions. I present this research not as a final statement to the literature, but in hopes that future research will continue its explanation of political resource allocation. An even greater hope is that in time this literature will be used to identify optimal "policy-influencing policies": constitutional election policies that provide for the implementation of Pareto-improving government policies.
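For readers unfamiliar with the game referenced above, the sketch below evaluates the payoff of the classic Colonel Blotto game for two hypothetical campaign allocations. The budgets, district count, and allocation rule are illustrative assumptions, not the dissertation's model.

```python
# Minimal sketch of the classic Colonel Blotto payoff: two parties split fixed
# budgets across districts, and each district goes to the side that allocates
# more to it.  All numbers below are hypothetical.
import random

def blotto_payoff(alloc_a, alloc_b):
    """Districts won by A minus districts won by B (ties contribute nothing)."""
    score = 0
    for a, b in zip(alloc_a, alloc_b):
        if a > b:
            score += 1
        elif a < b:
            score -= 1
    return score

def random_allocation(budget, districts, rng):
    """Uniformly random split of an integer budget over the districts
    (stars-and-bars construction)."""
    cuts = sorted(rng.sample(range(1, budget + districts), districts - 1))
    return [b - a - 1 for a, b in zip([0] + cuts, cuts + [budget + districts])]

rng = random.Random(0)
a = random_allocation(100, 5, rng)
b = random_allocation(100, 5, rng)
print(a, b, blotto_payoff(a, b))
```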
155

Control-friendly scheduling algorithms for multi-tool, multi-product manufacturing systems

Bregenzer, Brent Constant 27 January 2012 (has links)
The fabrication of semiconductor devices is a highly competitive and capital-intensive industry. Due to the high costs of building wafer fabrication facilities (fabs), it is expected that products should be made efficiently with respect to both time and material, and that expensive unit operations (tools) should be utilized as much as possible. The process flow is characterized by frequent machine failures, drifting tool states, parallel processing, and reentrant flows. In addition, the competitive nature of the industry requires products to be made quickly and within tight tolerances. All of these factors conspire to make both the scheduling of product flow through the system and the control of product quality metrics extremely difficult. Up to now, much research has been done on the two problems separately, but until recently, interactions between the two systems, which can sometimes be detrimental to one another, have mostly been ignored. The research contained here seeks to tackle the scheduling problem by utilizing objectives based on control system parameters so that the two systems might behave in a more mutually beneficial manner. A non-threaded control system is used that models the multi-tool, multi-product process in a state-space form and estimates the states using a Kalman filter. Additionally, the process flow is modeled by a discrete-event simulation. The two systems are then merged to give a representation of the overall system. Two control system matrices, the estimate error covariance matrix from the Kalman filter and a square form of the system observability matrix called the information matrix, are used to generate several control-based scheduling algorithms. These methods are then tested against more traditional approaches from the scheduling literature to determine their effectiveness, both in terms of how well they maintain the outputs near their targets and how well they minimize the cycle time of the products in the system. The two metrics are viewed simultaneously through the use of Pareto plots, and the merits of the various scheduling methods are judged on the basis of Pareto optimality for several test cases.
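To make the role of the estimate error covariance concrete, the sketch below propagates the Kalman-filter covariance for two tool states and then picks the tool whose state is most uncertain. The specific "largest covariance trace" ranking rule and all numbers are illustrative assumptions, not the scheduling algorithms developed in the thesis.

```python
# Hedged sketch of a control-based scheduling signal: propagate the Kalman
# error covariance per tool state and prioritize the tool whose estimate is
# least certain.  The rule and numbers are illustrative assumptions only.
import numpy as np

def kalman_covariance_step(P, A, Q, H, R, measured):
    """One predict/update cycle for the error covariance only."""
    P = A @ P @ A.T + Q                    # predict
    if measured:                           # update only if this state was observed
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(P.shape[0]) - K @ H) @ P
    return P

# Two scalar tool-state estimates, one measured this run and one not.
A = np.eye(1); Q = np.array([[0.05]]); H = np.eye(1); R = np.array([[0.2]])
P_tools = {"tool_A": np.array([[1.0]]), "tool_B": np.array([[1.0]])}
P_tools["tool_A"] = kalman_covariance_step(P_tools["tool_A"], A, Q, H, R, measured=True)
P_tools["tool_B"] = kalman_covariance_step(P_tools["tool_B"], A, Q, H, R, measured=False)

# Schedule the next lot on the tool whose state is least well known.
next_tool = max(P_tools, key=lambda t: np.trace(P_tools[t]))
print(next_tool, {t: float(np.trace(P)) for t, P in P_tools.items()})
```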
156

Περιβάλλουσα ανάλυση δεδομένων / Data envelopment analysis

Σαΐττης, Κωνσταντίνος 25 May 2015 (has links)
This thesis aims to present and analyze the method of Data Envelopment Analysis, which was created to assess the efficiency of organizational units such as bank branches, schools, hospitals, and restaurants. The key that enables us to compare these units lies in the resources they use to produce their outputs. Data Envelopment Analysis was introduced in 1978 by Charnes, Cooper, and Rhodes in their seminal study (Charnes, Cooper, and Rhodes, 1978). That paper provided estimates of the efficiency of non-profit organizations and may be considered an extension of the notion of technical efficiency given by Farrell in 1957. The first chapter presents the theoretical background of the method, the linear models behind it, the way efficiency is calculated, the assumptions underlying the production possibility set, graphical representations of the method, and an example combining all of the above. The second chapter presents alternative models that are extensions of the basic models, highlighting the versatility of the method. Several of these models arose from the need to address a number of inconsistencies. Among the models presented are the additive model, the extended additive model, the multiplicative model, and models with exogenous and categorical variables. The additive model, in contrast to the CCR and BCC models, has the ability to minimize inputs and maximize outputs simultaneously; the CCR and BCC models can either minimize inputs or maximize outputs, but not both at once. The second chapter also presents the use of absolute bounds on the weight coefficients, whose peculiar behavior contributes to some of the inconsistencies observed in the method.
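As a hedged sketch of the kind of linear program underlying the basic method, the code below computes input-oriented CCR efficiency scores in the multiplier form with scipy. The DMU input/output data are hypothetical, and the formulation is the textbook CCR model rather than anything specific to this thesis.

```python
# Sketch of the input-oriented CCR efficiency score in its multiplier (LP)
# form, solved with scipy.  The data for the four DMUs (e.g. bank branches)
# are hypothetical.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 300.0],   # inputs: staff, operating cost
              [30.0, 200.0],
              [40.0, 100.0],
              [20.0, 200.0]])
Y = np.array([[1000.0], [1000.0], [1000.0], [800.0]])  # output: transactions

def ccr_efficiency(o):
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: output weights u (s of them), then input weights v (m)
    c = np.concatenate([-Y[o], np.zeros(m)])            # maximize u'y_o
    A_eq = np.concatenate([np.zeros(s), X[o]]).reshape(1, -1)
    b_eq = [1.0]                                        # normalization v'x_o = 1
    A_ub = np.hstack([Y, -X])                           # u'y_j - v'x_j <= 0 for all j
    b_ub = np.zeros(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (s + m))
    return -res.fun                                     # efficiency in (0, 1]

for o in range(len(X)):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```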
157

Nupjauto lognormaliojo ir Pareto skirstinių mišinio kai kurios savybės / Some properties of the truncated lognormal and Pareto distributions mixture

Žuklijaitė, Viktorija 08 September 2009 (has links)
In insurance mathematics, the two-parameter lognormal and Pareto distributions are often used to model loss data. The lognormal distribution is used to model small losses with high frequency, while the Pareto distribution is used to model large losses with low frequency. In order to capture both kinds of losses in one model, a mixture of the truncated lognormal and Pareto distributions with three free parameters is constructed. This work is written with reference to the article by Kahadawala Cooray and Malwane M. A. Ananda, "Modeling actuarial data with a composite lognormal – Pareto model" (Scandinavian Actuarial Journal, 2005, 5, pp. 321-334), in which the composite lognormal – Pareto distribution is studied. The work examines the lognormal and Pareto distributions, discusses why they should be used together as a mixture, presents the derivation of the density function of the truncated lognormal and Pareto distributions mixture, and illustrates graphically how it changes depending on the choice of parameters. In the practical part, three data sets are analyzed: data simulated from the composite lognormal – Pareto distribution, personal accident insurance losses of a Lithuanian insurance company, and Danish fire insurance losses. For each data set, the mixture parameters are estimated numerically by maximum likelihood, and the results are compared with the maximum likelihood estimates of the composite lognormal – Pareto, lognormal, Pareto, Gamma, and Weibull distributions. The comparison uses the Kolmogorov–Smirnov and Anderson–Darling criteria and the chi-square goodness-of-fit... [to full text]
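The sketch below illustrates the model-comparison step in the practical part: candidate loss distributions are fitted by maximum likelihood with scipy and ranked by their Kolmogorov–Smirnov statistics. The simulated claim sample is hypothetical, and the composite lognormal–Pareto fit itself is not reproduced here.

```python
# Sketch of the comparison step: fit candidate loss models by maximum
# likelihood and compare their Kolmogorov-Smirnov statistics.  The simulated
# claim sample stands in for the insurance data sets used in the thesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical claim sample: many small lognormal losses plus a heavy Pareto tail.
claims = np.concatenate([
    rng.lognormal(mean=7.0, sigma=0.8, size=900),
    (rng.pareto(1.5, size=100) + 1.0) * 5_000.0,
])

candidates = {
    "lognormal": stats.lognorm,
    "pareto": stats.pareto,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(claims)                      # maximum likelihood estimates
    ks = stats.kstest(claims, dist.cdf, args=params)
    print(f"{name:10s}  KS statistic = {ks.statistic:.4f}")
```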
158

Optimization of industrial shop scheduling using simulation and fuzzy logic

Rokni, Sima Unknown Date
No description available.
159

Analyse statistique de la pauvreté et des inégalités / Statistical analysis of poverty and inequality

Diouf, Mame Astou January 2008 (has links)
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal.
160

Methods for Composing Tradeoff Studies under Uncertainty

Bily, Christopher 2012 August 1900 (has links)
Tradeoff studies are a common part of engineering practice. Designers conduct tradeoff studies in order to improve their understanding of how various design considerations relate to one another. Generally, a tradeoff study involves a systematic multi-criteria evaluation of various alternatives for a particular system or subsystem. After evaluating these alternatives, designers eliminate those that perform poorly under the given criteria and explore more carefully those that remain. The capability to compose preexisting tradeoff studies is advantageous to the designers of engineered systems, such as aircraft, military equipment, and automobiles. Such systems comprise many subsystems for which prior tradeoff studies may exist. System designers conceivably could explore system-level tradeoffs more quickly by leveraging this knowledge. For example, automotive systems engineers could quickly combine tradeoff studies from the engine and transmission subsystems to produce a comprehensive tradeoff study for the power train. This level of knowledge reuse is in keeping with good systems engineering practice. However, existing procedures for generating tradeoff studies under uncertainty involve assumptions that preclude engineers from composing them in a mathematically rigorous way. In uncertain problems, designers can eliminate inferior alternatives using stochastic dominance, which compares the probability distributions defined in the design criteria space. Although this is well founded mathematically, the procedure can be computationally expensive because it typically entails a sampling-based uncertainty propagation method for each alternative being considered. This thesis describes two novel extensions that permit engineers to compose preexisting subsystem-level tradeoff studies under uncertainty into mathematically valid system-level tradeoff studies and to eliminate inferior alternatives efficiently through intelligent sampling. The approaches are based on three key ideas: the use of stochastic dominance methods to enable tradeoff evaluation when the design criteria are uncertain, the use of parameterized efficient sets to enable reuse and composition of subsystem-level tradeoff studies, and the use of statistical tests in dominance testing to reduce the number of behavioral model evaluations. The approaches are demonstrated in the context of a tradeoff study for a motor vehicle.
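As a hedged illustration of the dominance idea used for elimination, the sketch below checks first-order stochastic dominance between two alternatives on a single uncertain criterion using empirical CDFs from Monte Carlo samples. The sampled distributions are hypothetical, and the statistical-test refinement described in the thesis is not reproduced.

```python
# Sketch of eliminating a design alternative by first-order stochastic
# dominance on one uncertain criterion (cost, lower is better), using the
# empirical CDFs of Monte Carlo samples.  The samples are hypothetical.
import numpy as np

rng = np.random.default_rng(7)
cost_a = rng.normal(100.0, 5.0, size=2000)   # alternative A
cost_b = rng.normal(110.0, 5.0, size=2000)   # alternative B

def fosd_dominates(x, y, grid_size=200):
    """True if X first-order stochastically dominates Y for a 'smaller is
    better' criterion: F_X(t) >= F_Y(t) for every grid point t (X is never
    less likely to fall below any cost threshold)."""
    grid = np.linspace(min(x.min(), y.min()), max(x.max(), y.max()), grid_size)
    F_x = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    F_y = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return bool(np.all(F_x >= F_y))

if fosd_dominates(cost_a, cost_b):
    print("Alternative B is eliminated: A is stochastically no more costly.")
```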
