  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Fuzzy C-Means Clustering Approach to Design a Warehouse Layout

Naik, Vaibhav C 08 July 2004 (has links)
Products are allocated in a warehouse according to a storage policy. Storage policies fall into three broad categories: dedicated storage, randomized storage, and class-based storage. Under a dedicated storage policy each product is assigned a designated slot, while under a randomized policy an incoming product is assigned a storage location at random, close to the input/output point. Class-based storage is a mixed policy in which products are assigned randomly within a fixed class. Dedicated storage is the policy most commonly used in practice. When a large warehouse layout is being designed, product information such as throughput and storage level is often uncertain or unavailable to the designer, so products cannot be located using the throughput-to-storage-ratio method underlying the policies mentioned above. To address this uncertainty in the product data, we propose a fuzzy C-means clustering (FCM) approach. This research aims to improve efficiency (distance or time traveled) by designing a fuzzy-logic-based layout for a warehouse with a large number of products. The proposed approach looks for similarity in the product data to form clusters, which can be used directly to develop the warehouse layout. We further investigate whether the FCM approach can take into account other factors, such as product size, similarity, and/or other characteristics, to generate layouts that are not only efficient in reducing the distance traveled to store and retrieve products but also effective in terms of retrieval time, space utilization, and material control.
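The clustering step described above can be sketched in a few lines. This is a minimal fuzzy C-means implementation on invented product data (throughput, storage level); the data, cluster count, and fuzzifier value are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

def fcm(data, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Return (centers, membership matrix) for fuzzy C-means with fuzzifier m."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    # Random initial membership matrix; each row sums to 1.
    u = rng.random((n, n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        um = u ** m
        # Cluster centers are membership-weighted means of the data.
        centers = (um.T @ data) / um.sum(axis=0)[:, None]
        dist = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-10)          # avoid division by zero
        inv = dist ** (-2.0 / (m - 1.0))
        u_new = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return centers, u

# Hypothetical products: columns = (throughput, storage level).
products = np.array([[9.0, 2.0], [8.5, 2.2], [1.0, 7.0], [1.2, 6.8], [5.0, 5.0]])
centers, u = fcm(products, n_clusters=2)
labels = u.argmax(axis=1)   # hard assignment, e.g. for zoning the layout
```

The membership matrix `u`, rather than the hard labels, is what carries the uncertainty information: a product with memberships near 0.5/0.5 could reasonably be stored in either zone.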
22

Feldetektering för ett smart elnät i Stockholms skärgård : Undersökning av lämpliga metoder och parametrar för detektering av jord- och kortslutningsfel i ett mellanspänningsdistributionsnät på Ljusterö med omnejd / Fault detection for a smart grid in the Stockholm archipelago : Study of suitable methods and parameters for ground and phase fault detection in a medium-voltage distribution grid on Ljusterö and vicinity

Söderström, Joel January 2016 (has links)
This report investigates the conditions for reconstructing a small 10 kV distribution grid in the Stockholm archipelago into a so-called smart grid with a self-healing function. The grid is in a remote location and has suffered many faults in recent years, so it is desirable to locate faults from the control center and automatically reconfigure the network, restoring service through a back-up feeder where needed, to reduce downtime for the island residents. The report focuses in particular on the suitability of the SICAM FCM fault-detection equipment for this grid and on how it should be configured. Information about the grid and the problem is presented together with a theoretical background. Results from tests and fault simulations on the SICAM FCM, focusing on sensitivity around threshold values, are presented and analysed. Based on the test results and the grid data, recommended SICAM FCM settings have been produced, together with other recommendations for implementing the self-healing grid.
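The threshold-sensitivity question studied above can be illustrated with a toy over-current detector: flag a fault when phase current exceeds a pickup threshold for a minimum number of consecutive samples. This is a generic sketch, not the SICAM FCM's internal algorithm; the threshold and current values are invented.

```python
def detect_fault(samples, pickup_amps, min_consecutive):
    """Return the index where current first stays above the pickup
    threshold for min_consecutive samples, or None if no fault."""
    run = 0
    for i, amps in enumerate(samples):
        run = run + 1 if amps > pickup_amps else 0
        if run >= min_consecutive:
            return i - min_consecutive + 1
    return None

# Normal load ~100 A, then a short-circuit raises current past the 400 A pickup.
current = [100, 105, 98, 450, 470, 465, 460]
print(detect_fault(current, pickup_amps=400, min_consecutive=3))  # → 3
```

The trade-off the report examines is visible even here: a pickup set too close to peak load current produces false trips, while one set too high (or a long confirmation window) delays or misses real faults.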
23

Organizational Energy: A Behavioral Analysis of Human and Organizational Factors in Manufacturing

Irani, Zahir, Sharif, Amir M., Papadopoulos, T. 06 March 2015 (has links)
Yes / This paper seeks to explore the behavior and embodied energy involved in the decision-making of information technology/information systems (IT/IS) investments using a case within a small- to medium-sized manufacturing firm. By analyzing decision making within a given case context, this paper describes the nature of the investment through the lens of behavioral economics, causality, input-output (IO) equilibrium, and the general notion of depletion of executive energy function. To explore the interplay between these elements, the authors structure the case context via a morphological field in order to construct a fuzzy cognitive map of decision-making relationships relating to the multidimensional and nonquantifiable problems of IT/IS investment evaluation. Noting the significance of inputs and outputs relating to the investment decision within the case, the authors assess these cognitive interrelationships through the lens of the Leontief IO energy equilibrium model. Subsequently, the authors suggest, through an embodied energy audit, that all such management decisions are susceptible to decision fatigue (so-called “ego depletion”). The findings of this paper highlight pertinent cognitive and IO paths of the investment decision-making process that will allow others making similar types of investments to learn from and draw parallels from such processes.
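The fuzzy cognitive map mentioned above can be sketched as a signed weight matrix of concept-to-concept influences, iterated until the concept activations settle. The concepts and weights below are invented for illustration; they are not the map constructed in the paper.

```python
import numpy as np

def run_fcm_map(weights, state, steps=20):
    """Iterate a fuzzy cognitive map: each concept's new activation is a
    sigmoid of the weighted sum of influences from all concepts."""
    for _ in range(steps):
        state = 1.0 / (1.0 + np.exp(-(weights.T @ state)))  # squash to (0, 1)
    return state

# Hypothetical concepts: [investment pressure, decision fatigue, IT/IS benefit]
# W[i][j] = influence of concept i on concept j (positive or negative).
W = np.array([
    [0.0,  0.6,  0.5],   # investment pressure raises fatigue and benefit
    [0.0,  0.0, -0.7],   # decision fatigue lowers realized benefit
    [0.3,  0.0,  0.0],   # realized benefit feeds back into pressure
])
final = run_fcm_map(W, np.array([1.0, 0.0, 0.0]))
```

The sign structure is the interesting part: a negative edge from fatigue to benefit is how an "ego depletion" effect would be encoded in such a map.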
24

期貨經紀商與客戶間法律關係之研究 / Futures Commission Merchants and Customers

蔡惠如, Tsai, Huei Ju Unknown Date (has links)
Since the establishment of the Chicago Board of Trade (CBOT) in 1848, centralized futures markets have grown steadily in both the variety of traded commodities and contract volume, offering hedgers and speculators opportunities to manage price risk and to profit from price movements. Scholarship and practice have developed continuously, reaching maturity in both legal regulation and economic analysis. In Taiwan, by contrast, this carefully designed trading system was never properly introduced and instead operated outside the formal system: underground futures firms proliferated for many years under the guise of futures trading, and, combined with an immature speculative mentality among the public, the futures market was pervaded by an atmosphere of gambling, differing greatly from foreign futures markets. A handful of crude administrative rules could not remedy this, and correcting the abnormal situation became an urgent task. Fortunately, on July 10, 1992, the Legislative Yuan passed the Foreign Futures Trading Law on its third reading, finally giving futures trading legal status; by April 1994 many futures commission merchants had commenced formal operations, marking a new milestone for Taiwan's futures trading system. This thesis comprises five chapters. Chapter 1, "Introduction," states the research motivation and objectives: in light of the promulgation of the Foreign Futures Trading Law, which provides the legal basis for foreign futures trading, the thesis examines the legal relationship between futures commission merchants and their customers. Chapter 2, "Overview of the Futures Trading System," introduces the history, meaning, process, legal nature, and parties of futures trading, to give a general outline of the system. Chapter 3, "Futures Commission Merchants," focuses on the meaning of futures commission merchants, the legal relationship between futures commission merchants and customers, and the futures arbitration system, examining issues in the Foreign Futures Trading Law with reference to relevant provisions of US law. Chapter 4, "Civil Liability of Futures Commission Merchants," focuses on liability for damages, comparing Taiwanese law (including the Foreign Futures Trading Law and the Civil Code) with US law. Chapter 5, "Conclusion," synthesizes the views of the preceding four chapters in a concise summary.
25

Flicker Source Identification At A Point Of Common Coupling Of The Power System

Altintas, Erinc 01 June 2010 (has links) (PDF)
Voltage fluctuations below 30 Hz in the electricity grid lead to oscillations in light intensity that can be perceived by the human eye, a phenomenon called flicker. This thesis investigates the sources of flicker at a point of common coupling (PCC). When more than one flicker source is connected to a PCC, the individual contribution of each source is determined using a new method based on the reactive current components of the sources. The method builds on the flickermeter design defined by the International Electrotechnical Commission (IEC), but uses current variations in addition to voltage variations to compute flicker. The proposed method is applied to several different types of loads supplied from a PCC, and their flicker contributions at the busbar are investigated. Experiments are performed on field data obtained by the power quality analyzers (PQ+) developed within the National Power Quality Project, and the method is found to provide accurate results for the flicker contributions of various loads. PQ+ analyzers equipped with the proposed flicker-contribution detection algorithm are called Flicker Contribution Meters (FCM) and will be installed at points of the Turkish Electricity Transmission Network when required.
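The phenomenon itself is easy to demonstrate numerically. The sketch below synthesizes a 50 Hz voltage amplitude-modulated at 8.8 Hz (a modulation frequency the eye is highly sensitive to) and recovers the sub-30 Hz modulation frequency from the squared signal's spectrum. This is a toy demodulation, not the IEC flickermeter chain or the thesis's reactive-current method; all signal parameters are invented.

```python
import numpy as np

fs = 3200.0                        # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)      # 2 s of data
carrier = np.sin(2 * np.pi * 50 * t)
envelope = 1.0 + 0.05 * np.sin(2 * np.pi * 8.8 * t)   # 5 % flicker modulation
v = envelope * carrier

# Squaring demodulates the envelope; the modulation then shows up as a
# low-frequency line, separate from the mains component near 100 Hz.
spec = np.abs(np.fft.rfft(v ** 2))
freqs = np.fft.rfftfreq(v.size, 1 / fs)
band = (freqs > 0.5) & (freqs < 30)        # flicker-relevant band
detected = freqs[band][np.argmax(spec[band])]
```

With a 2 s record the frequency resolution is 0.5 Hz, so `detected` lands on the bin nearest 8.8 Hz. The actual flickermeter adds eye-and-lamp weighting filters and statistical evaluation on top of this demodulation idea.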
26

Fracture Of Plain Concrete Beams Via Fractals

Renuka Devi, M V 11 1900 (has links)
The quantitative description of rough fracture surfaces of concrete has been an important challenge for many years. Looking at the fracture surface of a concrete specimen, one realizes that the self-affine geometry of the crack faces results from the stochastic nature of crack growth. This is due to the heterogeneous nature of concrete, which makes the crack tortuous: it winds its way through weak bonds, voids, and mortar, and is arrested on encountering a hard aggregate, forming crack-face bridges. These mechanisms contribute to the tendency of the crack to follow a tortuous path. The self-similarity contained in the tortuous fracture surface of concrete makes it an ideal candidate to be considered a fractal. Further, the softening response itself has been treated as a singular fractal function by earlier investigators, and the very process of cracking and microcracking can be considered very close to a stick-slip process and therefore a fractal. Modeling a crack as a fractal and characterizing it by a fractal dimension have therefore become a focus of research in recent years. Because of randomly distributed discontinuous flaws and the high heterogeneity of the internal structure of concrete, its mechanical properties also vary randomly. Under the same external force, different points in the concrete are subjected to different stress intensity factors, so the microcracks induced by the external force are distributed discontinuously and randomly. In the present study the effect of this random nature of the microcracks in the fracture process zone of concrete is therefore investigated using both a fractal and a probabilistic approach. The most probable fractal dimension of a network of microcracks is obtained as a function of the branching angle 'α' of the microcracks, considered as a random variable.
Further, an ensemble of cracks is synthetically generated using a Monte Carlo technique, imposing the constraint that the random deviations do not exceed the maximum aggregate size. Such tortuous cracks are analyzed by extending the Fictitious Crack Model (FCM) proposed by Hillerborg et al [37]. A numerical study is carried out to examine the influence of certain important fracture parameters on the response of plain concrete beams. The contents of this thesis are organized in eight chapters with references at the end. Chapter-1 summarizes the historical development of fracture mechanics and briefly reviews the basic concepts of fracture mechanics theory. Chapter-2 reviews the literature on fracture mechanics of concrete, presenting a systematic overview of the analytical, numerical, and fractal models to date. In chapter-3 the fracture process zone is modeled as a fractal following the work of Ji et al [118]. The contribution here is to improve on Ji et al [118] (which treats the region of microcracks as a fractal tree) by considering the branching angle as a random variable; the mean fractal dimension thus obtained matches well with experimental results available in the literature. In chapter-4 the FCM of Hillerborg et al [37] is modified to apply to cracks with inclined faces by considering both the horizontal and vertical components of the closing forces. The theoretical aspects of the modified FCM are described in detail, along with the procedure for determining the influence coefficient matrices for a random tortuous crack in mode-I and mixed mode, and for a fractal crack. In the subsequent chapters the study is taken up in two parts. In the first part, a single generator of the fractal tree considered by Ji et al [118] is analyzed by FCM to obtain load-deformation responses and fracture energy.
In part two, a random tortuous crack, as defined earlier, is analyzed in both mode-I and mixed mode using FCM. In chapter-5 plain concrete beams with one generator of the fractal tree are analyzed, and the influence of the branching angle on the post-peak (P-δ) response and the fracture energy is obtained. In chapter-6 a random tortuous crack is analyzed in mode-I by FCM. The analysis reveals the influence of the maximum aggregate size on the pre- and post-peak behaviour, in support of the experimental findings. The nominal stress at peak is found to depend on the characteristic dimension of the structure, confirming the size effect. Fracture energy values obtained by the work-of-fracture method show good agreement with results in the literature. In chapter-7 a random tortuous crack is analyzed in mixed mode by FCM. In the modeling, symmetry is assumed only to facilitate computation, though it is known that loss of symmetry affects the peak load; the code developed in the thesis can, however, handle analysis of the whole beam. In chapter-8 a summary of the research work is presented along with a list of major observations, and references are given at the end.
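The Monte Carlo crack-generation step can be sketched simply: grow the crack upward in fixed vertical increments, drawing a random horizontal deviation at each step, bounded so that no deviation exceeds the maximum aggregate size. All parameter values below are illustrative assumptions, not the thesis's.

```python
import random

def generate_crack(n_segments, dy, max_aggregate, seed=42):
    """Return a list of (x, y) crack-face points starting at the notch tip,
    with each horizontal deviation constrained to ±max_aggregate."""
    rng = random.Random(seed)
    points = [(0.0, 0.0)]
    x = 0.0
    for i in range(1, n_segments + 1):
        dx = rng.uniform(-max_aggregate, max_aggregate)  # bounded random deviation
        x += dx
        points.append((x, i * dy))
    return points

# One realization: 10 segments of 5 mm rise, 20 mm maximum aggregate size.
crack = generate_crack(n_segments=10, dy=5.0, max_aggregate=20.0)
```

Repeating this with different seeds yields the ensemble of tortuous cracks on which the modified FCM analysis would be run; each realization's inclined faces are what require resolving the closing forces into horizontal and vertical components.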
27

Analysis of Quality of Experience by applying Fuzzy logic : A study on response time

Ataeian, Seyed Mohsen, Darbandi, Mehrnaz Jaberi January 2011 (has links)
To be successful in today's competitive market, service providers should treat user satisfaction as a critical key. To gain a better understanding of customers' expectations, a proper evaluation is needed that considers the intrinsic characteristics of perceived quality of service. Because of the subjective nature of quality, the vagueness of human judgment, and the uncertainty about the degree of users' linguistic satisfaction, fuzziness is inherently associated with quality of experience. Given the capability of fuzzy logic to deal with imprecision and qualitative knowledge, it is a natural mathematical tool for analyzing quality of experience (QoE). This thesis proposes a fuzzy procedure to evaluate QoE. In our proposed methodology, we provide a fuzzy relationship between QoE and Quality of Service (QoS) parameters. To identify this fuzzy relationship, a new term called the Fuzzified Opinion Score (FOS), representing a fuzzy quality scale, is introduced. A fuzzy data-mining method is applied to construct the required number of fuzzy sets, and the membership functions describing these fuzzy sets are modeled and compared with each other. The proposed methodology will assist service providers in better decision-making and resource management.
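The core mapping from a QoS parameter to a fuzzy opinion score can be sketched with triangular membership functions and centroid defuzzification. The fuzzy sets, breakpoints, and representative score values below are invented for illustration; the thesis derives its sets (the FOS) with a fuzzy data-mining method rather than fixing them by hand.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def qoe_score(response_ms):
    """Map response time (ms) to a 1..5 opinion score via fuzzy sets."""
    # (membership in set, representative score of that set) — illustrative.
    sets = [
        (tri(response_ms, -1, 0, 300), 5.0),      # "fast"   -> excellent
        (tri(response_ms, 100, 500, 900), 3.0),   # "medium" -> fair
        (tri(response_ms, 600, 1500, 3000), 1.0), # "slow"   -> bad
    ]
    num = sum(mu * s for mu, s in sets)
    den = sum(mu for mu, _ in sets)
    return num / den if den else 1.0   # centroid (weighted-average) defuzzification

scores = [qoe_score(ms) for ms in (50, 400, 1200)]
```

Overlapping sets are the point of the exercise: a 700 ms response is partly "medium" and partly "slow", so its score blends both judgments instead of jumping at an arbitrary threshold.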
28

Development of headspace solid phase microextraction gas chromatography mass spectrometry method for analysis of volatile organic compounds in board samples : Correlation study between chromatographic data and flavor properties / Utveckling av fastfas mikroextraktion gaskromatografi masspektrometisk metod för analys av flyktiga organiska föreningar i kartongprover : Korrelationsstudie av kromatografisk data och smakegenskaper

Zethelius, Thea January 2021 (has links)
The purpose of this thesis work was to develop a headspace solid phase microextraction gas chromatography mass spectrometry (HS-SPME-GC-MS) method to detect volatile organic compounds (VOCs) in board samples, and to statistically investigate potential correlations between the chromatographic data and flavor data obtained from a trained panel. The developed method would hopefully serve as a complement to the already established routine analyses at Stora Enso and give an increased understanding of which VOCs in the board influence its flavor properties. The impact of incubation time and adsorption time on the area under the curve (AUC) was studied in a Design of Experiments screening using the software MODDE. The screening data showed a correlation between large AUC and low repeatability, measured as relative standard deviation (RSD). The data were hard to fit to a model because of the large RSD values for the replicates; using the AUC of identified compounds as the response gave an acceptable fit. The regression coefficients of the model showed that a longer adsorption time gave larger AUC, while incubation time had no significant impact on the response. Instead of following up the screening with an optimization, the focus shifted to improving the repeatability of the method, i.e. lowering the RSD. The high RSD was believed to result mainly from leakage of analytes and unstable temperature during adsorption, preventing the system from reaching equilibrium. Different heating and capping options for the vial were tested. A septum in a crimp cap ensured a gas-tight seal for the vial, giving lower RSD values and larger AUC than the other alternatives and showing that there was indeed a leakage. An oil bath ensured a stable temperature during adsorption and detection of a larger number of VOCs, but created a temperature gradient in the vial because the vial was not fully submerged in the oil.
The oil bath gave larger AUC but still high RSD, since the temperature gradient made the method sensitive to variation in fiber depth in the vial. The final method used 2 g of board sample in a 20 ml headspace vial sealed with a crimp cap with septum. Incubation and adsorption were performed with the vial immersed in a 90-degree oil bath. An incubation time of 20 min was chosen based on the time it took to establish a stable temperature gradient in the vial, and an adsorption time of 20 min was chosen as a good compromise between large AUC and low RSD. Compared to Stora Enso's routine analysis, the developed SPME method gave chromatograms with an improved baseline signal-to-noise ratio and several more peaks with larger AUC. For the board sample used during method development, the SPME method identified 34 VOCs, while the routine analysis identified only 12. The developed method was applied to 11 archived board samples of the same quality, selected on the basis of their original flavor properties to obtain a large diversity of samples. Flavor analysis was performed by letting a trained flavor panel describe the intensity and character of water that had been in indirect contact with one of the 11 board samples for 24 h. Potential correlations between the chromatographic data obtained with the developed method and the flavor experience described by the panelists were investigated statistically with the multivariate analysis software SIMCA. The correlation study showed that a combination of 12 VOCs with short retention times is most likely the main source of off-flavor, of which 5 could be identified only with the developed SPME method. VOCs with long retention times did not contribute to off-flavor and might have a masking effect on flavor given by other VOCs, though this was not confirmed in this study.
Furthermore, the age of the board samples proved to be a good indicator for predicting flavor intensity, whereas the total AUC of the samples was not. Possible correlations between detected VOCs and the flavor character reported by the panel were seen, but the variation in the data and the sample set were too small to allow conclusions on the impact of individual VOCs on the flavor experience. The developed HS-SPME-GC-MS method can serve as a complement to the already established routine analyses at Stora Enso and has somewhat increased the understanding of which VOCs in the board influence its flavor properties.
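The repeatability metric used throughout the screening above, relative standard deviation of replicate peak areas, takes only a few lines. The AUC values here are made up for illustration.

```python
import statistics

def rsd_percent(values):
    """RSD = sample standard deviation expressed as a percentage of the mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

replicate_auc = [1.02e6, 0.95e6, 1.10e6]   # hypothetical peak areas for 3 replicates
print(round(rsd_percent(replicate_auc), 1))
```

An RSD in this range (several percent) would already be considered good for headspace SPME replicates; the leakage and temperature-gradient problems described above showed up as RSDs far larger than this.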
29

An evolutionary Pentagon Support Vector finder method

Mousavi, S.M.H., Vincent, Charles, Gherman, T. 02 March 2020 (has links)
Yes / In dealing with big data, we need effective algorithms; effectiveness depends, among other things, on the ability to remove outliers from the data set, especially in classification problems. To this end, support-vector finder algorithms have been created to retain just the most important data in the data pool. Nevertheless, existing classification algorithms, such as Fuzzy C-Means (FCM), suffer from the drawback of setting the initial cluster centers imprecisely. In this paper, we avoid these shortcomings and aim to find and remove unnecessary data in order to speed up the final classification task without losing vital samples and without harming final accuracy. We present a unique approach for finding support vectors, named the evolutionary Pentagon Support Vector (PSV) finder method. The originality of the current research lies in using geometrical computations and evolutionary algorithms to build a more effective system, which has the advantage of higher accuracy on some data sets. The proposed method is tested on seven benchmark data sets, and the results are compared to those obtained from performing classification on the original data (classification before and after PSV) under the same conditions. The testing returned promising results.
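The general idea of a support-vector finder, keeping the boundary points that matter for classification and discarding interior ones, can be illustrated with a much simpler geometric sketch than the paper's pentagon/evolutionary method: from each class, keep only the points nearest to the other class. The data, `keep` count, and selection rule are all invented for illustration.

```python
import math

def boundary_points(class_a, class_b, keep):
    """Keep the `keep` points of class_a nearest to any point of class_b —
    crude candidates for support vectors."""
    def dist_to_b(p):
        return min(math.dist(p, q) for q in class_b)
    return sorted(class_a, key=dist_to_b)[:keep]

a = [(0, 0), (1, 0), (4, 0)]     # class A; (4, 0) sits nearest class B
b = [(6, 0), (7, 0), (10, 0)]    # class B; (6, 0) sits nearest class A
print(boundary_points(a, b, keep=1))   # → [(4, 0)]
print(boundary_points(b, a, keep=1))   # → [(6, 0)]
```

Training a classifier on only these retained points is what yields the speed-up the paper targets; the evolutionary part of the actual PSV method searches for a better selection than a fixed nearest-neighbor rule like this one.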
30

Adaptive Radio Resource Management in Cognitive Radio Communications using Fuzzy Reasoning

Shatila, Hazem Sarwat 23 April 2012 (has links)
As wireless technologies evolve, novel innovations and concepts are required to dynamically and automatically alter various radio parameters in accordance with the radio environment. These innovations open the door for cognitive radio (CR), a new concept in telecommunications. A CR makes its decisions using an inference engine, which can learn and adapt to changes in radio conditions. Fuzzy logic (FL) is the proposed decision-making algorithm for controlling the CR's inference engine; it is well suited to vague environments in which incomplete and heterogeneous information is present. In our proposed approach, FL is used to alter various radio parameters according to experience gained from different environmental conditions. FL requires a set of decision-making rules, which can vary according to radio conditions, but anomalies arise among these rules, causing degradation in the CR's performance. In such cases, the CR requires a method for eliminating these anomalies. In our model, we use a method based on the Dempster-Shafer (DS) theory of belief to accomplish this task. Extensive simulations and case studies showed that the DS theory indeed improved the CR's decision-making capability. Using FL together with the DS theory of belief is a vital module in the automation of various radio parameters for coping with the dynamic wireless environment. To demonstrate the FL inference engine, we propose a CR version of WiMAX, which we call CogMAX, to control different radio resources. Some of the physical layer parameters that can be altered for better results and performance are the channel estimation technique, the number of subcarriers used for channel estimation, the modulation technique, and the code rate. / Ph. D.
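The conflict-resolution mechanism named above, Dempster's rule of combination, is well defined and compact enough to show directly: masses from two evidence sources are multiplied over intersecting hypothesis sets and renormalized by the non-conflicting mass. The frame of discernment and the mass values below are invented for illustration, not taken from the dissertation.

```python
def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as
    {frozenset_of_hypotheses: mass} dicts."""
    combined = {}
    conflict = 0.0
    for s1, v1 in m1.items():
        for s2, v2 in m2.items():
            inter = s1 & s2
            if inter:
                combined[inter] = combined.get(inter, 0.0) + v1 * v2
            else:
                conflict += v1 * v2          # mass assigned to empty intersection
    k = 1.0 - conflict                       # normalization by non-conflict mass
    return {s: v / k for s, v in combined.items()}

# Hypothetical radio decisions from two partly conflicting rule sources.
A = frozenset({"lower_modulation"})
B = frozenset({"raise_modulation"})
AB = A | B                                   # undecided between the two
m1 = {A: 0.6, AB: 0.4}
m2 = {A: 0.3, B: 0.5, AB: 0.2}
m = combine(m1, m2)                          # combined belief masses
```

Here the two sources disagree (one puts 0.5 on raising the modulation order, the other nothing), and the rule resolves this by discounting the conflicting mass and renormalizing, which is exactly the anomaly-elimination role DS theory plays in the CR's rule base.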
