201

Optimal Reduced Size Choice Sets with Overlapping Attributes

Huang, Ke January 2015
Discrete choice experiments are used when choice alternatives can be described in terms of attributes. The objective is to infer the value that respondents attach to attribute levels. Respondents are presented with sets of profiles based on attributes specified at certain levels and asked to select the profile they consider best. When the number of attributes or attribute levels becomes large, the profiles in a single choice set may be too numerous for respondents to make precise decisions. One strategy for reducing the size of choice sets is the sub-setting of attributes. However, the optimality of these reduced-size choice sets has not been examined in the literature. We examine the optimality of reduced-size choice sets for 2^n experiments using information per profile (IPP) as the optimality criterion. We propose a new approach for calculating the IPP of designs obtained by dividing attributes into two or more subsets with one, two, and in general, r overlapping attributes, and compare the IPP of the reduced-size designs with that of the original full designs. Next we examine the IPP of choice designs based on 3^n factorial experiments. We calculate the IPP of reduced-size designs obtained by sub-setting attributes in 3^n plans and compare them to the original full designs. / Statistics
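The IPP idea can be illustrated with a small sketch. Under the multinomial logit model with all effects equal to zero, the Fisher information contributed by a choice set of m profiles with code vectors x_j is (1/m) * sum_j (x_j - xbar)(x_j - xbar)^T; dividing a scalar summary of the accumulated information by the total number of profiles gives an information-per-profile figure. The determinant-based summary below is illustrative only; the thesis's exact IPP definition is not reproduced here.

```python
import numpy as np

def mnl_information(choice_sets):
    """Fisher information for MNL effects at beta = 0: for each choice set of
    m profiles with code vectors x_j, add (1/m) * sum_j (x_j - xbar)(x_j - xbar)^T."""
    k = choice_sets[0].shape[1]
    info = np.zeros((k, k))
    for X in choice_sets:
        centered = X - X.mean(axis=0)
        info += centered.T @ centered / len(X)
    return info

# Toy 2^3 example: four foldover pairs (x, -x) in -1/+1 coding.
pairs = [np.array([x, [-v for v in x]])
         for x in ([1, 1, 1], [1, -1, 1], [1, 1, -1], [1, -1, -1])]
info = mnl_information(pairs)
n_profiles = sum(len(s) for s in pairs)
ipp = np.linalg.det(info) ** (1 / info.shape[0]) / n_profiles  # illustrative IPP
print(ipp)  # about 0.5 for this design
```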
202

Some Results on Pareto Optimal Choice Sets for Estimating Main Effects and Interactions in 2^n and 3^n Factorial Plans

Xiao, Jing January 2015
Choice-based conjoint experiments are used when choice alternatives can be described in terms of attributes. The objective is to infer the value that respondents attach to attribute levels. This method involves designing profiles on the basis of attributes specified at certain levels. Respondents are presented with sets of profiles, called choice sets, and asked to select the one they consider best. Sets with no dominating or dominated profiles are called Pareto optimal (PO) sets. Information per profile (IPP) is used as an optimality criterion to compare designs with different numbers of profiles. For a 2^n experiment, the optimality of connected main effects plans based on two consecutive choice sets, S_l and S_{l+1}, has been examined in the literature. In this thesis we examine the IPP of both consecutive and non-consecutive choice sets and show that IPP can be maximized under certain conditions. We show that non-consecutive choice sets have higher IPP than consecutive choice sets for n ≥ 4. We also examine the optimality of connected first-order-interaction designs based on three choice sets and show that non-consecutive choice sets have higher IPP than consecutive choice sets under certain conditions. Further, we examine the D-, A- and E-optimality of consecutive and non-consecutive PO choice sets with maximum IPP. Finally, we consider 3^n choice experiments. We look for the optimal PO choice sets, examine their IPP and their D-, A- and E-optimality, and compare consecutive and non-consecutive choice sets. / Statistics
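As a concrete illustration of the PO property, the following sketch checks whether a choice set contains any dominating or dominated profile. The coding is an assumption for illustration: every attribute is coded so that a higher level is preferred.

```python
import numpy as np

def is_pareto_optimal(choice_set):
    """True if no profile in the set dominates another (higher level preferred
    on every attribute, by assumption)."""
    X = np.asarray(choice_set)
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and np.all(X[i] >= X[j]) and np.any(X[i] > X[j]):
                return False  # profile i dominates profile j
    return True

print(is_pareto_optimal([[1, 0, 1], [0, 1, 1], [1, 1, 0]]))  # True
print(is_pareto_optimal([[1, 1, 1], [0, 1, 0]]))             # False: first dominates
```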
203

Strategies for Scrap Minimization: A Case Study at Alfa

Michael, San, Shanshal, Faisal January 2024
Date: 31 May 2024
Level: Degree project in the Master of Science in Engineering programme, Industrial Engineering and Management
Authors: San Michael, Faisal Shanshal
Title: Strategies for scrap minimization: A case study at Alfa
Supervisor: Sofia Wagrell
Keywords: Scrap, scrap minimization, Total Quality Management, TQM, quality, Pareto analysis, sustainability
Research questions: 1. What are the significant causes of scrap at Alfa? 2. What specific measures and strategies can Alfa implement in its production processes to effectively minimize scrap?
Purpose: The purpose of this study is to investigate the causes of scrap formation and then develop proposed strategies for minimizing scrap. The work aims to create a sustainable operation that is also resource-efficient by identifying and analyzing the causes of scrap and proposing the most suitable measures. A more sustainable operation in turn contributes to reduced environmental impact and economically responsible production. With its focus on scrap minimization, the work hopes to promote cost savings and create long-term sustainability in the company's operations.
Method: To answer the research questions and achieve the study's purpose, a combination of quantitative and qualitative methods with an abductive approach is applied. The quantitative data were collected through a Microsoft Power BI report in which patterns, relationships, and trends could be identified. The qualitative data were collected through semi-structured interviews to capture perceptions, experiences, and opinions.
Conclusion: The main causes of scrap at Alfa were identified as poor communication within the company and with suppliers, poor quality of supplier material, and inefficient inspection and documentation processes. These factors lead to increased scrapping, production delays, and the risk of defective products reaching customers. First and foremost, improving internal communication is crucial; one step is to introduce stop meetings at which the company can quickly identify and address causes of production stops and scrapping, which also improves collaboration between departments. It is also important to strengthen communication with suppliers by establishing clear communication channels and quality requirements, ensuring that high-quality material is used in production. Continuous improvement and technological upgrading are likewise central: implementing new technology and maintaining strict inspection processes can ensure that only defect-free components enter production. Effective documentation is necessary for analyzing the causes of scrap and making fact-based decisions, helping the company reduce scrap systematically. Finally, promoting reuse, by establishing training programs and repair centers, can allow defective components to be reused instead of scrapped, reducing waste and costs. With these measures and strategies, Alfa can not only reduce its production costs but, it is hoped, also contribute to a more sustainable and economically responsible operation, in line with the study's purpose of creating resource-efficient and environmentally friendly production.
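The Pareto analysis named in the keywords can be sketched briefly. The cause categories and counts below are hypothetical, not the study's data; the code ranks causes by scrap count and reports the cumulative share until roughly 80% of scrap is covered (the "vital few").

```python
# Hypothetical scrap counts per cause, in the spirit of the study's Pareto analysis.
causes = {"supplier material defects": 310, "internal communication": 240,
          "inspection misses": 120, "documentation errors": 60,
          "machine drift": 45, "handling damage": 25}

items = sorted(causes.items(), key=lambda kv: kv[1], reverse=True)
total = sum(causes.values())
cum = 0
for name, count in items:
    cum += count
    print(f"{name:28s} {count:4d}  cumulative {100 * cum / total:5.1f}%")
    if cum / total >= 0.8:  # stop once ~80% of scrap is accounted for
        break
```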
204

Development and Applications of a Multi-Objective Signal Control Strategy during Oversaturated Conditions

Adam, Zaeinulabddin Mohamed Ahmed 28 September 2012 (has links)
Managing traffic during oversaturated conditions is a current challenge for practitioners due to the lack of adequate tools that can handle such situations. Unlike under-saturated conditions, operating traffic signal systems during congestion requires careful consideration and analysis of the underlying causes of the congestion before developing mitigation strategies. The objectives of this research are to provide practical guidance for practitioners to identify oversaturated scenarios and to develop a multi-objective methodology for selecting and evaluating a mitigation strategy, or combination of strategies, based on guiding principles. The research focused on traffic control strategies that can be implemented by traffic signal systems; it did not consider strategies that deal with demand reduction or seek to influence departure-time choice or route choice. The proposed timing methodology starts by detecting the network's critical routes, a necessary step to identify traffic patterns and potential problematic scenarios. A wide array of control strategies is defined and categorized to address oversaturated scenarios. A timing procedure was then developed using the principles of oversaturation timing in cycle selection, split allocation, offset design, demand overflow, and queue allocation in non-critical links. Three regimes of operation were defined and considered in oversaturation timing: (1) loading, (2) processing, and (3) recovery. The research also provides a closed-form formula for switching control plans during the oversaturation regimes. The selection of the optimal control plan is formulated as a linear integer programming problem. Microscopic simulation results for two arterial test cases revealed that traffic control strategies developed using the proposed framework led to tangible performance improvements compared to signal control strategies designed for operation in under-saturated conditions. The generated control plans successfully managed to allocate queues among network links. / Ph. D.
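The dissertation's exact integer-programming formulation is not reproduced in the abstract; the sketch below shows the general shape of such a selection problem, using PuLP and invented delay figures: pick exactly one control plan per oversaturation regime to minimize total estimated delay.

```python
import pulp

regimes = ["loading", "processing", "recovery"]
plans = ["P1", "P2", "P3"]
# Hypothetical delay estimates (veh-h) for each plan in each regime.
delay = {"loading":    {"P1": 120, "P2": 95,  "P3": 140},
         "processing": {"P1": 80,  "P2": 105, "P3": 70},
         "recovery":   {"P1": 60,  "P2": 55,  "P3": 75}}

prob = pulp.LpProblem("control_plan_selection", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (regimes, plans), cat="Binary")
prob += pulp.lpSum(delay[r][p] * x[r][p] for r in regimes for p in plans)
for r in regimes:
    prob += pulp.lpSum(x[r][p] for p in plans) == 1  # exactly one plan per regime
prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = {r: p for r in regimes for p in plans if x[r][p].value() > 0.5}
print(chosen)
```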
205

Adaptive Asymmetric Slot Allocation for Heterogeneous Traffic in WCDMA/TDD Systems

Park, JinSoo 29 November 2004 (has links)
Even though 3rd and 4th generation wireless systems aim to deliver multimedia services at high speed, full-fledged multimedia services remain difficult to achieve due to insufficient system capacity. Many technical challenges must be addressed to realize true multimedia services. One of these challenges is how to allocate resources to traffic efficiently as wireless systems evolve. The literature shows that strategic manipulation of traffic can lead to efficient use of resources in both wire-line and wireless networks. This brings our attention to the role of link-layer protocols, which is to orchestrate the transmission of packets efficiently using the given resources. The Media Access Control (MAC) layer therefore plays a very important role in this context. In this research, we investigate technical challenges involving resource control and management in the design of MAC protocols based on the characteristics of traffic, and provide strategies to address those challenges. The first and foremost matter in wireless MAC protocol research is the choice of multiple access scheme; each scheme has advantages and disadvantages. We choose Wideband Code Division Multiple Access/Time Division Duplexing (WCDMA/TDD) systems since they are known to be efficient for bursty traffic. Most existing MAC protocols developed for WCDMA/TDD systems address the performance of a unidirectional link, in particular the uplink, assuming that the number of slots for each link is fixed a priori. This ignores the dynamic aspect of TDD systems. We believe that adaptive dynamic slot allocation can bring further benefits in terms of efficient resource management. Meanwhile, the adaptive slot allocation issue has also been dealt with from a completely different angle: related research focuses on adaptive slot allocation to minimize inter-cell interference in multi-cell environments. We believe these two issues need to be handled together to enhance the performance of MAC protocols, and we therefore embark upon a study of adaptive dynamic slot allocation for the MAC protocol. The research starts by examining the key factors that affect the adaptive allocation strategy. Through the review of the literature, we conclude that traffic characterization is an essential component in achieving efficient resource control and management, so we identify appropriate traffic characteristics and metrics: the volume and burstiness of traffic are chosen as the characteristics for our adaptive dynamic slot allocation. Based on this examination, we propose four major adaptive dynamic slot allocation strategies: (i) a strategy based on the estimation of the burstiness of traffic, (ii) a strategy based on the estimation of the volume and burstiness of traffic, (iii) a strategy based on the parameter estimation of a traffic distribution, and (iv) a strategy based on the exploitation of physical-layer information. The first method estimates the burstiness on both links and assigns the number of slots for each link according to the ratio of these two estimates. The second method estimates the burstiness and volume of traffic on both links and assigns the number of slots for each link according to the ratio of weighted volumes on each link, where the weights are driven by the estimated burstiness of each link.
For the estimation of burstiness, we propose a new burstiness measure based on the ratio between the peak and median volume of traffic. This measure requires the determination of an observation window, within which the median and peak are measured. We propose a dynamic method for selecting the observation window, making use of statistical characteristics of traffic: the autocorrelation function (ACF) and partial ACF (PACF). For the third method, we develop several estimators of the parameters of a traffic distribution and suggest two new slot allocation methods based on the estimated parameters. The last method exploits physical-layer information as another way of allocating slots to enhance system performance. The performance of the proposed strategies is evaluated in various scenarios. The major simulations fall into three categories: data traffic, combined voice and data traffic, and real trace data. The performance of each strategy is evaluated in terms of throughput and packet drop ratio; in addition, we consider the frequency of slot changes to assess performance in terms of control overhead. We expect this research to add to the state of the knowledge in the field of link-layer protocol research for WCDMA/TDD systems. / Ph. D.
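The peak-to-median burstiness measure and the ratio-based slot split lend themselves to a compact sketch. This is a minimal reading of the abstract: the ACF/PACF-driven window selection is replaced by a fixed window, and the proportional split is an assumption about how the ratio of estimates is applied.

```python
import numpy as np

def burstiness(arrivals, window):
    """Peak-to-median ratio of traffic volume per observation window."""
    n = len(arrivals) // window * window
    vols = np.asarray(arrivals[:n]).reshape(-1, window).sum(axis=1)
    return vols.max() / max(np.median(vols), 1e-9)

def split_slots(uplink, downlink, total_slots, window=10):
    """Assign slots to each link in proportion to its estimated burstiness."""
    b_up, b_dn = burstiness(uplink, window), burstiness(downlink, window)
    n_up = max(1, min(total_slots - 1, round(total_slots * b_up / (b_up + b_dn))))
    return n_up, total_slots - n_up

rng = np.random.default_rng(0)
up = rng.poisson(2.0, 600)                                   # smooth uplink traffic
dn = rng.poisson(2.0, 600) * rng.binomial(1, 0.2, 600) * 5   # bursty downlink
print(split_slots(up, dn, total_slots=12))
```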
206

Multi-Objective Design Optimization Considering Uncertainty in a Multi-Disciplinary Ship Synthesis Model

Good, Nathan Andrew 17 August 2006 (has links)
Multi-disciplinary ship synthesis models and multi-objective optimization techniques are increasingly being used in ship design. Multi-disciplinary models allow designers to break away from the traditional design-spiral approach and search the design space for the best overall design instead of the best discipline-specific design. Complex design problems such as these often carry high levels of uncertainty, and since most optimization algorithms tend to push solutions to constraint boundaries, the calculated "best" solution may be infeasible if there are minor uncertainties in the model or problem definition. Consequently, uncertainty must be addressed in optimization problems to produce effective and reliable results. This thesis focuses on adding a third objective, uncertainty, to the effectiveness and cost objectives already present in a multi-disciplinary ship synthesis model. Uncertainty is quantified using a "confidence of success" (CoS) calculation based on the mean value method. CoS is the probability that a design will satisfy all constraints and meet performance objectives. This work shows that the CoS concept can be applied to synthesis models to estimate uncertainty early in the design process. Multiple sources of uncertainty are realistically quantified and represented in the model in order to investigate their relative importance to the overall uncertainty. This work also presents methods to encourage a uniform distribution of points across the Pareto front. With a well-defined front, designs can be selected and refined using a gradient-based optimization algorithm that optimizes a single objective while holding the others fixed. / Master of Science
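CoS as described is a probability of constraint satisfaction. The thesis computes it with the mean value method; the sketch below substitutes plain Monte Carlo, which estimates the same quantity more expensively, and uses invented constraint margins and uncertainty levels.

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence_of_success(design, n_samples=100_000):
    """Estimate P(all constraints satisfied) under input uncertainty.
    Monte Carlo stand-in for the mean value method; 10% std dev assumed."""
    weight = rng.normal(design["weight"], 0.10 * design["weight"], n_samples)
    power = rng.normal(design["power"], 0.10 * design["power"], n_samples)
    ok = (weight <= design["max_weight"]) & (power <= design["max_power"])
    return ok.mean()

# Hypothetical candidate design: nominal weight/power with hard upper limits.
cos = confidence_of_success({"weight": 9000, "max_weight": 10000,
                             "power": 45, "max_power": 50})
print(f"CoS = {cos:.3f}")
```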
207

Evaluation Techniques for Mapping IPs on FPGAs

Lakshminarayana, Avinash 01 September 2010 (has links)
The phenomenal density growth in semiconductors has resulted in the availability of billions of transistors on a single die. Time-to-design is shrinking continuously due to aggressive competition, and the integration of many discrete components on a single chip is growing at a rapid pace. Designing such heterogeneous systems in a short time is becoming difficult with existing technology. Field-Programmable Gate Arrays (FPGAs) offer a good alternative on both the productivity and heterogeneity fronts. However, many obstacles must be addressed to make them a viable option. One such obstacle is the lack of early design space exploration tools and techniques for FPGA designs. This thesis develops techniques to systematically evaluate the available design options before actual system implementation. What makes this problem interesting, yet complicated, is that system-level optimization is not linearly summable: discrete components benchmarked as best in all design parameters (speed, area, and power) need not add up to the best possible system. This work addresses the problem in two ways. In the first approach, we demonstrate that working at higher levels of abstraction can yield orders-of-magnitude improvements in productivity. Designing a system directly from its behavioral description is an ongoing effort in industry. Instead of focusing on design aspects, we use these methods to develop quick prototypes and estimate the design parameters. Design space exploration needs relative comparison among available choices, not accurate values of design parameters, and it is shown that the proposed method does an acceptable job in this regard. The second approach develops statistical techniques for estimating the design parameters and then algorithmically searching the design space. Specifically, a high-level power estimation model is developed for FPGA designs. While existing techniques develop a power model for each discrete component separately, this work evaluates the option of a generic power model for multiple components. / Master of Science
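A high-level power model of the kind described can be as simple as a regression from resource counts and activity to measured power. Everything below (the features, the data, the linear form) is an assumption for illustration; the thesis does not specify its model in the abstract.

```python
import numpy as np

# Hypothetical training data per IP block: LUTs, flip-flops, BRAMs, toggle rate.
X = np.array([[1200,  800,  2, 0.12],
              [4500, 3100,  8, 0.25],
              [ 900,  650,  1, 0.08],
              [7800, 5200, 16, 0.31],
              [3200, 2100,  4, 0.18],
              [5600, 3900, 10, 0.22]], dtype=float)
p = np.array([35.0, 140.0, 22.0, 260.0, 95.0, 180.0])  # measured power (mW)

# Fit a linear power model p ~ X @ w + b via least squares.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, p, rcond=None)

def estimate_power(luts, ffs, brams, toggle):
    """Predict power (mW) for an unseen IP from its resource counts."""
    return float(np.array([luts, ffs, brams, toggle, 1.0]) @ coef)

print(estimate_power(2000, 1400, 3, 0.15))
```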
208

Managing performance expectations in association football

Fry, John, Serbera, J-P., Wilson, R.J. 10 August 2021 (has links)
Motivated by excessive managerial pressure and sackings, together with associated questions over the inefficient use of scarce resources, we explore realistic performance expectations in association football. Our aim is to improve management quality by accounting for information asymmetry. Results highlight uncertainty caused both by football's low-scoring nature and by the intensity of the competition. At a deeper level, we show that fans and journalists are prone to underestimate the uncertainties associated with individual matches. Further, we quantify reasonable expectations in the face of unevenly distributed resources. In line with the statactivist approach, we call for more rounded assessments once the underlying uncertainties are adequately accounted for. Managing fan expectations is probably impossible, though the potential for constructive dialogue remains.
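The claim that match-level uncertainty is underestimated is easy to illustrate with a simple independent-Poisson scoring model (a standard benchmark, not necessarily the paper's model): even a side with twice the opponent's expected goals wins well under two-thirds of the time.

```python
from math import exp, factorial

def pois(k, lam):
    return lam ** k * exp(-lam) / factorial(k)

def match_probs(lam_home, lam_away, max_goals=10):
    """Win/draw/loss probabilities under independent Poisson scoring."""
    win = draw = loss = 0.0
    for h in range(max_goals + 1):
        for a in range(max_goals + 1):
            p = pois(h, lam_home) * pois(a, lam_away)
            if h > a:
                win += p
            elif h == a:
                draw += p
            else:
                loss += p
    return win, draw, loss

# Expected goals 1.8 vs 0.9: the favourite wins only about 57% of the time.
print(match_probs(1.8, 0.9))
```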
209

Developing a Decision Making Approach for District Cooling Systems Design using Multi-objective Optimization

Kamali, Aslan 18 August 2016
Energy consumption rates have been increasing dramatically on a global scale within the last few decades. A significant driver of this increase is the recent high temperature levels, especially in summer, which have caused a rapid rise in air-conditioning demand. This phenomenon is clearly observed in developing countries, especially those in hot climate regions, where people depend mainly on conventional air-conditioning systems. These systems often show poor performance and thus negatively impact the environment, which in turn contributes to global warming. In recent years, the demand for urban or district cooling technologies and networks has been increasing significantly as an alternative to conventional systems due to their higher efficiency and improved ecological impact. However, obtaining an efficient design for a district cooling system is a complex task that requires considering a wide range of cooling technologies, various network layout configurations, and several energy resources to be integrated. Thus, critical decisions have to be made regarding a variety of opportunities, options, and technologies. The main objective of this thesis is to develop a tool that produces preliminary design configurations and operation patterns for district cooling energy systems by performing roughly detailed optimizations, and to introduce a decision-making approach that helps decision makers evaluate the economic aspects and environmental performance of urban cooling systems at an early design stage. Different aspects of the subject have been investigated by several researchers. A brief survey of the state of the art revealed that mathematical programming models are the most common and successful technique for configuring and designing cooling systems for urban areas; multi-objective optimization models were therefore chosen to support the decision-making process. Hence, a multi-objective optimization model has been developed to address the complicated decision-making problem of designing a cooling system for an urban area or district. The model optimizes several elements of a cooling system, such as the cooling network, cooling technologies, and the capacity and location of system equipment. In addition, various energy resources are taken into consideration, as well as solar technologies such as trough solar concentrators, vacuum solar collectors, and PV panels. The model was developed using mixed-integer linear programming (MILP) and implemented in the GAMS language.
Two case studies were investigated using the developed model. The first consists of seven buildings representing a residential district, while the second is a university campus dominated by non-residential buildings. The study was carried out for several groups of scenarios investigating design parameters and operating conditions such as available area, production plant location, cold storage location constraints, piping prices, investment cost, constant and variable electricity tariffs, solar energy integration policy, waste heat availability, load-shifting strategies, and the effect of outdoor temperature in hot regions on district cooling system performance. The investigation consisted of three stages, with total annual cost and CO2 emissions as the first and second single-objective optimization stages. The third stage was a multi-objective optimization combining the two single objectives. Non-dominated solutions, i.e. Pareto solutions, were then generated across several multi-objective optimization scenarios based on the decision makers' preferences. Finally, a decision-making approach was developed to help decision makers select the solution that best fits their preferences, based on the difference between the Utopia and Nadir values, i.e. the total annual cost and CO2 emissions obtained at the single-objective optimization stages.
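The Utopia/Nadir-based selection step can be sketched as follows. The front values are invented, and the rule shown (minimum normalized distance to the Utopia point) is a common compromise-programming choice; the thesis's exact selection rule may differ.

```python
import numpy as np

# Hypothetical Pareto front: (total annual cost in M EUR, CO2 emissions in kt/a).
front = np.array([[4.2, 12.0], [4.6, 10.1], [5.1, 8.9], [5.9, 8.2], [6.8, 7.9]])

utopia = front.min(axis=0)  # best value of each objective taken alone
nadir = front.max(axis=0)   # worst value among the Pareto-optimal points

# Normalize each objective by its Utopia-Nadir range, then pick the point
# closest to the Utopia point.
normalized = (front - utopia) / (nadir - utopia)
best = front[np.argmin(np.linalg.norm(normalized, axis=1))]
print(best)
```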
210

The Double Pareto-Lognormal Distribution and its applications in actuarial science and finance

Zhang, Chuan Chuan 01 1900
The purpose of this Master's thesis is to describe the double Pareto-lognormal distribution, show how the model can be extended by introducing explanatory variables, and present its broad potential for applications in actuarial science and finance. First, we give the definition of the double Pareto-lognormal distribution and present some of its properties based on the work of Reed and Jorgensen (2004). The parameters can be estimated using the method of moments or maximum likelihood. Next, we add an explanatory variable to our model and discuss the estimation procedure for this model. Finally, some numerical applications of our model are illustrated and some useful statistical tests are conducted.
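A compact way to see the distribution's structure is through the Reed-Jorgensen representation: log X is the sum of a normal variate and an asymmetric Laplace variate, so X has a lognormal body with Pareto-like upper (index alpha) and lower (index beta) tails. The sampler below uses that representation; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def rdpln(n, alpha, beta, mu, sigma):
    """Sample the double Pareto-lognormal distribution:
    log X = Normal(mu, sigma^2) + asymmetric Laplace(alpha, beta)."""
    z = rng.normal(mu, sigma, n)
    # Positive exponential tail with prob beta/(alpha+beta), else negative tail.
    pos = rng.random(n) < beta / (alpha + beta)
    lap = np.where(pos, rng.exponential(1 / alpha, n), -rng.exponential(1 / beta, n))
    return np.exp(z + lap)

x = rdpln(100_000, alpha=2.5, beta=1.5, mu=0.0, sigma=0.5)
print(x.mean(), np.median(x))  # heavy right tail: mean well above median
```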
