161

A Comparative Study on Optimization Algorithms and its efficiency

Ahmed Sheik, Kareem January 2022 (has links)
Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many real-world problems are continuous, and global solutions are hard to find; however, advances in computing have increased the speed of the required computations [1]. For any optimization issue, the optimization method, an efficient numerical simulator, and a realistic depiction of the physical process to be described and optimized are interconnected components of the optimization process [2]. Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are implemented on the chosen algorithms, the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Ant Colony Optimization (ACO), and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches on metrics such as CPU time, optimality, accuracy, and mean best standard deviation. Methods: In this research work, a mixed-method approach is used. A literature review covers the existing optimization algorithms, and an experiment applies the ten benchmark functions to PSO, ACO, GA, and PIBO to measure their efficiency on the four metrics: CPU time, optimality, accuracy, and mean best standard deviation. This indicates which optimization algorithms perform better. Results: The experimental findings are presented in this section. The metrics (CPU time, optimality, accuracy, and mean best standard deviation) obtained by running the standard functions on the suggested method and the other methods are tabulated, and graphs are produced from the data. Analysis and Discussion: The research questions are addressed based on the results of the experiment. Conclusion: We conclude the research by analysing the existing optimization methods and the algorithms' performance. The results for the optimality metrics, best mean, standard deviation, and accuracy show that PIBO performs best overall; its significant drawback is CPU time, which is much higher than PSO's, close to GA's, and still much better than ACO's.
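To make the experimental setup concrete, the sketch below times a deliberately simple evolutionary search on two classic benchmark functions (sphere and Rastrigin) and reports the metrics the abstract names: CPU time, best value found, and the mean and standard deviation over repeated runs. All function choices, population sizes, and other parameters are illustrative assumptions, not taken from the thesis.

```python
# Illustrative sketch: timing a simple evolutionary search on two classic
# benchmark functions and collecting the metrics named in the abstract
# (CPU time, best value found, mean and standard deviation over runs).
import time
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def rastrigin(x):
    return 10 * len(x) + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def simple_ga(f, dim=10, pop=50, gens=200, sigma=0.3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    parents = rng.uniform(-5.12, 5.12, size=(pop, dim))
    for _ in range(gens):
        # Mutate every parent, then keep the best half of parents + offspring.
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        both = np.vstack([parents, children])
        fitness = np.apply_along_axis(f, 1, both)
        parents = both[np.argsort(fitness)[:pop]]
    return f(parents[0])

for name, f in [("sphere", sphere), ("rastrigin", rastrigin)]:
    t0 = time.process_time()
    bests = [simple_ga(f, rng=np.random.default_rng(seed)) for seed in range(10)]
    cpu = time.process_time() - t0
    print(f"{name}: best={min(bests):.4g} mean={np.mean(bests):.4g} "
          f"std={np.std(bests):.4g} cpu={cpu:.2f}s")
```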
162

MulTe: A Multi-Tenancy Database Benchmark Framework

Kiefer, Tim, Schlegel, Benjamin, Lehner, Wolfgang 26 January 2023 (has links)
Multi-tenancy in relational databases has been a topic of interest for a couple of years. On the one hand, the ever-increasing capabilities and capacities of modern hardware easily allow multiple database applications to share one system. On the other hand, cloud computing leads to the outsourcing of many applications to service architectures, which in turn leads to offerings for relational databases in the cloud as well. The ability to benchmark multi-tenancy database systems (MT-DBMSs) is imperative to evaluate and compare systems and helps to reveal otherwise unnoticed shortcomings. With several tenants sharing an MT-DBMS, a benchmark is considerably different from classic database benchmarks and calls for new benchmarking methods and performance metrics. Unfortunately, no single, well-accepted multi-tenancy benchmark for MT-DBMSs is available, and few efforts have been made regarding the methodology and general tooling of the process. We propose a method to benchmark MT-DBMSs and provide a framework for building such benchmarks. To support the cumbersome process of defining and generating tenants, loading and querying their data, and analyzing the results, we propose and provide MulTe, an open-source framework that helps with all these steps.
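The benchmark workflow described, defining tenants, generating and loading their data, querying it, and analysing per-tenant results, might be outlined roughly as below. This is a toy sketch using SQLite purely to stay self-contained; the table layout, tenant sizes, and query are invented and bear no relation to MulTe's actual interfaces.

```python
# Toy outline of a multi-tenancy benchmark run: define tenants, generate and
# load per-tenant data, query it, and report per-tenant metrics.
import sqlite3
import time
import random

tenants = {"small": 1_000, "medium": 10_000, "large": 100_000}  # rows per tenant

db = sqlite3.connect(":memory:")
for name, rows in tenants.items():
    # One table per tenant is just one possible multi-tenancy layout.
    db.execute(f"CREATE TABLE orders_{name} (id INTEGER, amount REAL)")
    db.executemany(
        f"INSERT INTO orders_{name} VALUES (?, ?)",
        ((i, random.uniform(1, 500)) for i in range(rows)),
    )

for name in tenants:
    t0 = time.perf_counter()
    total, = db.execute(f"SELECT SUM(amount) FROM orders_{name}").fetchone()
    elapsed = time.perf_counter() - t0
    print(f"tenant={name:6s} rows={tenants[name]:7d} "
          f"sum={total:12.2f} query_time={elapsed * 1000:.2f} ms")
```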
163

Modelling and simulation of membrane bioreactors for wastewater treatment

Janus, Tomasz January 2013 (has links)
The work presented in this thesis leads to the formulation of a dynamic mathematical model of an immersed membrane bioreactor (iMBR) for wastewater treatment. This thesis is organised into three parts, each one describing a different set of tasks associated with model development and simulation. In the first part, the author qualitatively and quantitatively compares various published activated sludge models, i.e. models of biochemical processes associated with bacterial growth, decay, lysis and substrate utilisation in activated sludge systems. As the thesis is focused on modelling membrane bioreactors (MBRs), which are known to experience membrane fouling as a result of adsorption of biopolymers present in the bulk liquid onto and within the membrane, all activated sludge models considered in this thesis are able to predict, with various levels of accuracy, the concentrations of biopolymeric substances, namely soluble microbial products (SMP) and extracellular polymeric substances (EPS). Some of the published activated sludge models dedicated to modelling SMP and EPS kinetics in MBR systems were unable to predict the SMP and EPS concentrations with adequate accuracy without compromising the predictions of other sludge and wastewater constituents. In other cases, the model equations and the assumptions made by their authors were questionable. Hence, two new activated sludge models with SMP and EPS as additional components have been formulated, described, and simulated. The first model is based on the Activated Sludge Model No. 1 (ASM1), whereas the second is based on the Activated Sludge Model No. 3 (ASM3). Both models are calibrated on two sets of data, obtained from a laboratory-scale system and a full-scale system, and prove to be in very good agreement with the measurements. The second part of this thesis explains the development of two membrane fouling models. These models describe the loss of membrane permeability during filtration of various solutions and suspensions. The main emphasis is placed on filtration of activated sludge mixtures; however, the models are designed to be as general as feasible. As fouling is caused by a large number of often very complex processes that occur at different spatial and temporal scales, the two fouling models developed here have to make a number of significant simplifications and assumptions. These simplifications are required to balance the models' accuracy, generality and completeness against their usability in terms of execution times, identifiability of parameters and ease of implementation in general-purpose simulators. These requirements are necessary to ascertain that the long-term simulations as well as the optimisation and sensitivity studies performed in this thesis, either individually on the fouling models or on the complete MBR model, can be carried out within realistic time-scales. The first fouling model is based on the idea that fouling can be subdivided into just two processes: short-term reversible fouling and long-term irreversible fouling. These two processes are described with two first-order ordinary differential equations (ODEs).
Whilst the first model characterises the membrane filtration process from an observer's input-output point of view, without any rigorous deterministic description of the underlying mechanisms of membrane fouling, the second model provides a more theoretical and in-depth description of membrane fouling by incorporating and combining three classical macroscopic mechanistic fouling equations within a single simulation framework. Both models are calibrated on a number of experimental data sets and show good levels of accuracy for their designated applications and within the intended ranges of operating conditions. In the third part, the first developed biological model (CES-ASM1) is combined with the behavioural fouling model, and the links between these two models are formulated to allow complete simulation of a hollow-fibre (HF) immersed membrane bioreactor (iMBR). It is assumed that biological processes affect the membrane through production of mixed liquor suspended solids (MLSS), SMP and EPS, which cause pore blockage, cake formation and pore diameter constriction, and affect the specific cake resistance (SCR). The membrane, on the other hand, has a direct effect on the bulk liquid SMP concentration due to its SMP rejection properties. SMP are assumed to be solely responsible for irreversible fouling, MLSS is directly linked to the amount of cake deposited on the membrane surface, whereas the EPS content in activated sludge affects the cake's SCR. Other links provided in the integrated MBR model include the effects of air scouring on the rate of particle back-transport from the membrane surface and the effects of MLSS concentration on oxygen mass transfer. Although backwashing is not described in great detail, its effects are represented in the model by resetting the initial condition in the cake deposition equation after each backwash period. The MBR model was implemented in Simulink® using the plant layout adopted in the MBR benchmark model of Maere et al. [160]. The model was then simulated with the inputs and operational parameters defined in [36, 160]. The results were compared against the MBR benchmark model of Maere et al. [160], which, contrary to this work, does not take into account the production of biopolymers, membrane fouling, or any interactions between the biological and membrane parts of an MBR system.
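As a rough illustration of the first fouling model's structure, two coupled first-order ODEs, one for short-term reversible fouling and one for long-term irreversible fouling, can be integrated as below. The right-hand sides and coefficients are assumptions made for the sketch; the thesis's actual equations and identified parameters may differ.

```python
# Minimal sketch of the two-state fouling idea: a reversible resistance that
# builds up during filtration and partially relaxes, and an irreversible
# resistance that only accumulates. Right-hand sides and coefficients are
# illustrative assumptions, not the equations identified in the thesis.
import numpy as np
from scipy.integrate import solve_ivp

J = 20.0                     # permeate flux, L/(m^2 h), assumed constant here
a, b, c = 5e-3, 0.2, 1e-4    # illustrative rate coefficients

def fouling(t, R):
    R_rev, R_irr = R
    dR_rev = a * J - b * R_rev   # builds with flux, first-order relaxation
    dR_irr = c * J               # slow, one-way accumulation
    return [dR_rev, dR_irr]

sol = solve_ivp(fouling, (0.0, 48.0), [0.0, 0.0], dense_output=True)
t = np.linspace(0.0, 48.0, 7)
R_rev, R_irr = sol.sol(t)
for ti, rr, ri in zip(t, R_rev, R_irr):
    print(f"t={ti:5.1f} h  reversible={rr:.4f}  irreversible={ri:.5f}")
```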
164

The scheduling of manufacturing systems using Artificial Intelligence (AI) techniques in order to find optimal/near-optimal solutions

Maqsood, Shahid January 2012 (has links)
This thesis aims to review and analyse the scheduling problem in general, and the Job Shop Scheduling Problem (JSSP) in particular, together with the solution techniques applied to these problems. The JSSP is the most general and popular hard combinatorial optimization problem in manufacturing systems. Over the past sixty years, an enormous amount of research has been carried out to solve these problems. The literature review exposed the inherent shortcomings of existing solutions to scheduling problems. This has directed researchers to develop hybrid approaches, since no single scheduling technique has yet succeeded in providing optimal solutions to these difficult problems, and there remains much potential for improving existing techniques. A hybrid approach complements and compensates for the limitations of each individual solution technique, performing better and improving results in both static and dynamic production scheduling environments. Over the past years, hybrid approaches have generally outperformed simple Genetic Algorithms (GAs). Therefore, two novel priority heuristic rules are developed: the Index Based Heuristic and the Hybrid Heuristic. These rules are applied to benchmark JSSPs and compared with popular traditional rules. The results show that the new heuristic rules outperform the traditional heuristic rules over a wide range of benchmark JSSPs. Furthermore, a hybrid GA is developed as an alternative scheduling approach. The hybrid GA uses the novel heuristic rules in its key steps. It is applied to benchmark JSSPs and is also tested on benchmark flow shop scheduling problems and industrial case studies. The hybrid GA successfully found solutions to JSSPs and is not problem-dependent. Its performance across the case studies proves that the developed scheduling model can be applied to any real-world scheduling problem to achieve optimal or near-optimal solutions, which shows the effectiveness of the hybrid GA in real-world scheduling problems. In conclusion, all the research objectives are achieved. Finally, future work on the developed heuristic rules and the hybrid GA is discussed, and recommendations are made on the basis of the results.
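To illustrate what priority-rule dispatching looks like in practice, the sketch below schedules a tiny three-job, three-machine job shop using the classic shortest-processing-time (SPT) rule. It demonstrates the general mechanism only; the thesis's Index Based and Hybrid heuristics are novel rules not reproduced here.

```python
# Priority-rule dispatching for a tiny job shop: at each step, schedule the
# ready operation chosen by a priority rule (here the classic SPT rule).
# Each job is a sequence of (machine, duration) operations.
jobs = [
    [(0, 3), (1, 2), (2, 2)],
    [(0, 2), (2, 1), (1, 4)],
    [(1, 4), (2, 3), (0, 1)],
]

next_op = [0] * len(jobs)        # index of each job's next operation
job_ready = [0] * len(jobs)      # time each job's next op may start
machine_ready = [0, 0, 0]        # time each machine becomes free

while any(k < len(jobs[j]) for j, k in enumerate(next_op)):
    # Ready operations: (duration, job) for every job with work remaining.
    candidates = [
        (jobs[j][next_op[j]][1], j)
        for j in range(len(jobs)) if next_op[j] < len(jobs[j])
    ]
    dur, j = min(candidates)      # SPT: pick the shortest ready operation
    machine, _ = jobs[j][next_op[j]]
    start = max(job_ready[j], machine_ready[machine])
    job_ready[j] = machine_ready[machine] = start + dur
    print(f"job {j} op {next_op[j]} on machine {machine}: {start}-{start + dur}")
    next_op[j] += 1

print("makespan:", max(job_ready))
```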
165

Investigation of energy performance and climate change adaptation strategies of hotels in Greece

Farrou, Ifigenia January 2013 (has links)
There is evidence that hotels are the highest energy users among tertiary-sector buildings in Europe and internationally, because of their operational characteristics and the large number of users; there is therefore potential for significant energy savings. This study investigated the energy performance of the hotel sector in Greece and proposes a methodology for the energy classification of hotels, together with climate change mitigation strategies for an optimum building envelope design for a typical hotel building operated all year or seasonally. This was achieved by collecting operational energy data for 90 Greek hotels and analysing them using the k-means algorithm. A typical hotel building was then modelled using TRNSYS with climate change weather files to assess the impact on its energy demand and to propose climate change mitigation strategies. The assessment was performed via hourly simulations with real climatic data for the past and generated future data for the years 2020, 2050 and 2080. The analysis of the energy data (based on utility supplies) of the 90 hotels shows an average consumption of approximately 290 kWh/m2/year for hotels with annual operation and 200 kWh/m2/year for hotels with seasonal operation. Furthermore, the hotels were classified into well-separated clusters in terms of their electricity and oil consumption. The classification showed that each cluster has a high average energy consumption compared to other buildings in Greece. Cooling energy demand of the typical building increased by 33% and heating energy demand decreased by 22% in 2010 compared to 1970. The cooling load is expected to rise by 15% in 2020, 34% in 2050 and 63% in 2080 compared to 1970, while the heating load is expected to decrease by 14% in 2020, 29% in 2050 and 46% in 2080. It was found that different strategies can be applied to all-year and seasonally operated buildings for the most energy-efficient performance. These include: a. for all-year operated buildings: insulation, double low-e glazing, intelligently controlled night and day ventilation, ceiling fans and shading, with the building of 2050 needing more shading and the building of 2080 needing additional shading and cool materials; b. for seasonally operated buildings: intelligently controlled night and day ventilation, cool materials, ceiling fans, shading and double low-e glazing, with only the building of 2080 needing insulation. This study contributes to understanding the impact of climate change on the energy demand of hotel buildings and proposes mitigation strategies that focus on the building envelope in different periods and climatic zones of Greece.
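The classification step, clustering hotels by their electricity and oil consumption with k-means, might look like the sketch below. The data are synthetic placeholders roughly echoing the reported consumption levels, not the study's 90-hotel data set.

```python
# Sketch of the classification step: k-means clustering of hotels by their
# electricity and oil consumption. Synthetic data, not the study's.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: electricity and oil use, kWh/m2/year (synthetic placeholders).
hotels = np.vstack([
    rng.normal([120, 60], 15, size=(30, 2)),   # low-consumption group
    rng.normal([200, 90], 20, size=(30, 2)),   # mid-range group
    rng.normal([290, 140], 25, size=(30, 2)),  # high, annual operation
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(hotels)
for k in range(3):
    members = hotels[km.labels_ == k]
    print(f"cluster {k}: n={len(members):2d} "
          f"mean electricity={members[:, 0].mean():.0f} "
          f"mean oil={members[:, 1].mean():.0f} kWh/m2/year")
```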
166

Comparative study of casting simulation softwares for future use during early stages of product development

Navarro Aranda, Monica January 2015 (has links)
Within industrial product development processes there is an increasing demand for reliable predictions of material behaviour, which aims to promote a property-driven development that can reduce lead times. The implementation of simulation-based product development with integrated casting simulation may enable design engineers to gain an early understanding of their products with respect to castability, and to orient the subsequent design refinement towards the desired mechanical properties. This work investigates the suitability of three commercial casting simulation packages (MAGMA 5.2, NovaFlow & Solid 4.7.5 (NFS) and Click2Cast 3.0 (C2C)) with respect to the needs of design engineers, such as the prediction of shrinkage porosity and of mechanical properties in relation to the design. Simplified solidification simulations suitable for this stage were performed for three high-pressure die-cast components with different geometrical constraints. The comparability of the solidification and cooling behaviour predicted by the three packages was studied: a reasonably good agreement between the solidification times predicted by MAGMA and NFS could be obtained, but not between the predictions of MAGMA and C2C. The three packages' predictions of hot spot/porosity areas were in good agreement. The calculation times of each package were compared, and MAGMA showed the best performance, yielding significantly shorter times than NFS and C2C. The results obtained were also compared with experimental investigations of porosity, microstructural coarseness and mechanical properties. There was good agreement between the predicted hot spot areas (i.e. areas in the geometry that solidify last) and the porosities found in the actual castings, meaning that solidification simulations may be able to provide important information for predicting most of the shrinkage-related porosity locations that are related to the casting geometry. However, the lack of detailed knowledge of the casting process at the design stage limits the possibility of predicting all porosities. The microstructure and mechanical properties predicted by MAGMA non-ferrous agreed well in trend with the experimental data, although the predicted values differed considerably in magnitude. Although the MAGMA non-ferrous module was not developed for HPDC components, it was interesting to study whether it could be applied in this context; however, the models appear to need adaptation to the HPDC process and its alloys. In conclusion, with limited knowledge of the manufacturing parameters, simplified solidification simulations may still be able to provide reasonably reliable and useful information during early development stages in order to optimise the design of castings.
167

Sensorless Nonlinear Control of the Induction Machine (Commande non linéaire sans capteur de la machine asynchrone)

Traore, Dramane 19 November 2008 (has links) (PDF)
This thesis proposes control laws for the induction machine without a mechanical sensor. Each control law developed was validated experimentally on an industrial benchmark that takes into account the problems of the induction machine at very low speed. The observability study shows that the induction machine is unobservable at very low speed when the speed measurement is not available. The synthesis of observers for the sensorless induction machine was one of the main contributions of this work. First, a high-gain interconnected observer was designed to reconstruct the mechanical variables (speed, load torque) and the magnetic variables (flux). Second, an adaptive interconnected observer was synthesized to estimate, in addition to the mechanical and magnetic variables, the stator resistance, a crucial parameter at very low speed. The results on the "Sensorless observer" benchmark showed a noticeable improvement in robust performance. The design of sensorless nonlinear controllers for the induction machine constitutes the major contribution of this work: the global stability of the combined "controller + observer" scheme is demonstrated, with experimental validation on the "Sensorless control" benchmark. Several control laws were designed and compared: a PI-type law with nonlinear terms, first-order and then higher-order sliding-mode laws, and finally a backstepping law. The last two perform well at both low and high speed. The results compared favourably with those of an industrial drive on the "Sensorless control" benchmark: in the unobservable region, the industrial drive is unstable, unlike the control laws designed in this work.
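As a loose illustration of one of the compared law families, the sketch below applies a first-order sliding-mode controller to a bare one-state mechanical model (J dw/dt = u - T_load), which is far simpler than an induction machine. The plant, gains, and parameters are illustrative assumptions; the thesis's actual control and observer designs are not reproduced here.

```python
# Minimal sketch of first-order sliding-mode speed control on a one-state
# mechanical model. Gains and parameters are illustrative only; this is not
# the thesis's induction-machine control law.
import numpy as np

J_inertia = 0.05     # rotor inertia, kg m^2 (assumed)
T_load = 0.8         # load torque, unknown to the controller, N m
k_gain = 10.0        # sliding gain; must dominate the disturbance bound
dt, t_end = 1e-4, 0.5

w, w_ref = 0.0, 50.0  # speed and reference, rad/s
for step in range(int(t_end / dt)):
    s = w - w_ref                        # sliding surface
    u = -k_gain * np.sign(s)             # discontinuous control
    w += dt * (u - T_load) / J_inertia   # Euler step of the mechanics
    if step % 1000 == 0:
        print(f"t={step * dt:.2f}s  speed={w:7.2f}  surface={s:8.3f}")
```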
168

Automatické generování umělých XML dokumentů / Automatic Generation of Synthetic XML Documents

Betík, Roman January 2015 (has links)
The aim of this thesis is to research the current possibilities and limitations of automatic generation of the synthetic XML and JSON documents used in the area of Big Data. The first part of the work discusses and compares the properties of the most widely used XML, Big Data, and JSON data generators. The next part of the thesis proposes an algorithm for generating semistructured data. The main focus of the algorithm is on parallel execution of the generation process while preserving the ability to control the contents of the generated documents. The data generator can also use samples of real data when generating the synthetic data and is capable of automatically creating simple references between JSON documents. The last part of the thesis provides the results of experiments in which the data generator was used to test the MongoDB database, describes its added value, and compares it to other solutions.
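The generator's core ideas, parallel document generation, field values drawn from a sample of real data, and simple references between JSON documents, might be sketched as below. The document structure, field names, and sample values are invented for illustration; this is not the thesis's tool.

```python
# Hedged sketch: synthetic JSON documents produced in parallel, with field
# values drawn from a stand-in "real data" sample and simple cross-document
# references. Structure and field names are invented for illustration.
import json
import random
from concurrent.futures import ProcessPoolExecutor

REAL_NAME_SAMPLE = ["Alice", "Bob", "Carol", "Dan"]  # stand-in real-data sample

def make_doc(doc_id):
    rng = random.Random(doc_id)  # per-document seed keeps workers independent
    return json.dumps({
        "_id": doc_id,
        "name": rng.choice(REAL_NAME_SAMPLE),
        "score": rng.randint(0, 100),
        # Simple reference to an earlier document, mirroring the generator's
        # ability to create references between JSON documents.
        "friend_ref": rng.randrange(doc_id) if doc_id > 0 else None,
    })

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        docs = list(pool.map(make_doc, range(10_000), chunksize=500))
    print(docs[1])  # e.g. {"_id": 1, "name": ..., "friend_ref": 0}
```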
169

Deriving Consensus Rankings from Benchmarking Experiments

Hornik, Kurt, Meyer, David January 2006 (has links) (PDF)
Whereas benchmarking experiments are very frequently used to investigate the performance of statistical or machine learning algorithms for supervised and unsupervised learning tasks, overall analyses of such experiments are typically only carried out on a heuristic basis, if at all. We suggest determining winners, and more generally deriving a consensus ranking of the algorithms, as the linear order on the algorithms which minimizes the average symmetric distance (Kemeny-Snell distance) to the performance relations on the individual benchmark data sets. This leads to binary programming problems which can typically be solved reasonably efficiently. We apply the approach to a medium-scale benchmarking experiment assessing the performance of Support Vector Machines in regression and classification problems, and compare the obtained consensus ranking with rankings obtained by simple scoring and by Bradley-Terry modeling. / Series: Research Report Series / Department of Statistics and Mathematics
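For a handful of algorithms, the consensus ranking can be found by exhaustive search, as in the sketch below, which counts pairwise disagreements (the essence of the Kemeny-Snell distance between linear orders) against toy per-dataset rankings. The paper formulates this as a binary program instead, which scales far better; the algorithm names and rankings here are invented.

```python
# Brute-force consensus: choose the linear order on the algorithms that
# minimizes the total number of pairwise disagreements with the per-dataset
# rankings. Exhaustive search only works for a handful of items.
from itertools import combinations, permutations

algorithms = ["svm", "rf", "knn", "nnet"]
# One ranking per benchmark data set, best first (toy data).
rankings = [
    ["svm", "rf", "nnet", "knn"],
    ["rf", "svm", "knn", "nnet"],
    ["svm", "nnet", "rf", "knn"],
]

def disagreements(order, ranking):
    pos_o = {a: i for i, a in enumerate(order)}
    pos_r = {a: i for i, a in enumerate(ranking)}
    # Count unordered pairs ranked in opposite directions.
    return sum(
        (pos_o[a] < pos_o[b]) != (pos_r[a] < pos_r[b])
        for a, b in combinations(order, 2)
    )

consensus = min(
    permutations(algorithms),
    key=lambda order: sum(disagreements(order, r) for r in rankings),
)
print("consensus ranking:", " > ".join(consensus))
```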
170

An investigation of water usage in casual dining restaurants in Kansas

VanSchenkhof, Matthew January 1900 (has links)
Doctor of Philosophy / Department of Hospitality Management and Dietetics / Elizabeth Barrett / Water is essential for many aspects of daily life, including restaurant operations, and is necessary for the production and service of properly prepared, safe food. However, water is becoming scarcer and more expensive due to climate change, infrastructure needs, governmental budget constraints, and shifting water sources. The purpose of this study was to develop benchmarks for water usage and costs for casual dining restaurants (CDRs) in Kansas and to identify demographics that may impact water usage and costs. The population for the study was the 952 CDRs in Kansas. Stratified random sampling selected 60 restaurants from each of five Kansas demographic regions. Data were collected from local municipal water utilities, the Kansas Department of Revenue, Google's Place Page, and telephone or on-site interviews with a manager. Results for the 221 of 300 (74%) CDRs that responded indicated that, on average, 1,766 gallons of water were used per restaurant each day: 12.79 gallons per day per seat, 68 gallons per employee, and 0.73 gallons per interior square foot. These results were as much as 69% lower than those from a 2000 study conducted by Dziegielewski et al. Significant demographics that impacted water consumption were season of year, population (F= 9.763, p≤.001), menu (F= 2.921, p≤.035), type of ownership (F= 56.565, p≤.000), water source (F= 10.751, p≤.032), irrigation (F= 46.514, p≤.001), and days open (F= 6.085, p≤.000). A stepwise linear regression model (F= 33.676, p≤.000) found that ownership (β= -.329, p ≤ 0.000), irrigation (β= -.290, p ≤ 0.000), and population (β= -.176, p ≤ 0.003) impacted water consumption. For water costs, CDRs paid an average of $6.54 per 1,000 gallons of water consumed and had mean annual expenses of $5,026 on revenues of $2,554,254, equivalent to a water cost percentage of 0.42. Demographics that impacted water costs were season of year, region (F = 3.167, p≤ 0.015), and water source (F = 4.692, p≤ 0.032). However, a stepwise linear regression model (F= 4.485, p ≤ 0.036) found that only water source (β= -.152, p ≤ 0.036) was an indicator of the percentage of revenues related to the cost of water. This study identified benchmarks for water consumption and water costs that restaurateurs can use in the future. The primary limitation of the study is that the results can only be generalized to casual dining restaurants in Kansas. Future studies can be conducted with different types of restaurants in Kansas and with CDRs in other areas.
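The benchmark arithmetic reported above can be reproduced for a single hypothetical restaurant as below; the seat count, employee count, floor area, and revenue are invented, and only the daily-usage and price-per-gallon figures echo the study's averages.

```python
# Sketch of the benchmark arithmetic for one hypothetical restaurant. The
# seat count, employee count, floor area, and revenue are invented; only the
# 1,766 gal/day and $6.54 per 1,000 gal figures echo the study's averages.
daily_gallons = 1766
seats, employees, sq_ft = 140, 26, 2400          # invented demographics
water_rate = 6.54 / 1000                          # dollars per gallon
annual_revenue = 2_500_000.0                      # invented

annual_gallons = daily_gallons * 365              # assumes year-round operation
annual_cost = annual_gallons * water_rate
print(f"gallons/day/seat:        {daily_gallons / seats:.2f}")
print(f"gallons/day/employee:    {daily_gallons / employees:.1f}")
print(f"gallons/day/sq ft:       {daily_gallons / sq_ft:.2f}")
print(f"annual water cost:       ${annual_cost:,.0f}")
print(f"water cost % of revenue: {100 * annual_cost / annual_revenue:.2f}%")
```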
