161 |
Optimering av enzym-baserad immunohistokemisk metod i jämförelse mot immunofluorescens med fryssnittade hudbiopsier / Optimization of enzyme-based immunohistochemical method in comparison with immunofluorescence with frozen-cut skin biopsies. Johansson, Karin, January 2022 (has links)
Using immunohistochemistry, antigens and antibodies bound to tissue can be detected. Autoimmune skin diseases are examples of diseases diagnosed with immunohistochemistry. At Falu Hospital, immunofluorescence was used to diagnose autoimmune skin diseases. The aim of this study was to optimize an enzyme-based immunohistochemical method for the epitopes IgA, IgG, IgM and C3 and to compare it with immunofluorescence in terms of specificity, signal strength and resolution. The tissues analyzed were tonsil, liver, intestinal mucosa and skin biopsies. The tissues stained with ultraView DAB and ultraView DAB with FITC were fixed in 4% formaldehyde, whereas tissue stained with DIF with FITC was rinsed in Reaction Buffer only. The tissues were stained according to the protocols ultraView DAB, ultraView DAB with FITC and DIF with FITC, and a manual staining with activated DAB was also performed. The results showed background staining for all stainings. DIF with FITC gave clearer staining and made it easier to distinguish specific from nonspecific staining. Enzyme-based IHC proved difficult to optimize, and the epitope IgA generally stained more strongly than the epitope IgM. To obtain reliable results, several tissue samples must be analyzed.
|
162 |
An Instance Data Repository for the Round-robin Sports Timetabling Problem. Van Bulck, David; Goossens, Dries; Schönberger, Jörn; Guajardo, Mario, 11 August 2020 (has links)
The sports timetabling problem is a combinatorial optimization problem that consists of creating a timetable defining against whom, when and where teams play their games. This is a complex matter, since real-life sports timetabling applications are typically highly constrained. The vast number and variety of constraints, together with the lack of generally accepted benchmark problem instances, mean that timetabling algorithms proposed in the literature are often tested on just one or two specific seasons of the competition under consideration. This is problematic, since it yields only limited algorithmic insight. To mitigate this issue, this article provides a problem instance repository containing over 40 different types of instances, covering both artificial and real-life problems. Constructing such a repository is not trivial, since there are dozens of constraints that need to be expressed in a standardized format. For this, our repository relies on RobinX, an XML-supported classification framework. The resulting repository provides a (non-exhaustive) overview of most real-life sports timetabling applications published over the last five decades. For every problem, a short description highlights its most distinguishing characteristics. The repository is publicly available and will be continuously updated as new instances or better solutions become available.
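As a small illustration of the core object these instances describe, a single round-robin timetable for an even number of teams can be generated with the classic circle (polygon) method. The sketch below is illustrative only: it does not use the RobinX format or any instance from the repository, and the home/away alternation is an assumed, crude balancing rule.

```python
# Minimal sketch: single round-robin timetable via the circle method.
# Illustrative only; real instances in the repository carry many extra
# constraints (venues, breaks, forbidden slots) expressed in RobinX XML.

def circle_method(n_teams: int):
    """Return a list of rounds; each round is a list of (home, away) pairs."""
    assert n_teams % 2 == 0, "use a dummy 'bye' team if the count is odd"
    teams = list(range(n_teams))
    fixed, rotating = teams[0], teams[1:]
    rounds = []
    for r in range(n_teams - 1):
        pairing = [(fixed, rotating[0])] + [
            (rotating[i + 1], rotating[-1 - i]) for i in range((n_teams - 2) // 2)
        ]
        # Alternate home/away so no team hosts every game (crude balance rule).
        rounds.append([(a, b) if (r + i) % 2 == 0 else (b, a)
                       for i, (a, b) in enumerate(pairing)])
        rotating = rotating[1:] + rotating[:1]  # rotate all teams except the fixed one
    return rounds

if __name__ == "__main__":
    for r, games in enumerate(circle_method(6), start=1):
        print(f"round {r}: {games}")
```

Each of the n-1 rounds pairs every team exactly once, and every pair of teams meets exactly once over the tournament; real applications then add the constraints the repository standardizes on top of this basic structure.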
|
163 |
A Comparative Study on Optimization Algorithms and its efficiency. Ahmed Sheik, Kareem, January 2022 (has links)
Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many real-world problems are continuous, and it is not easy to find global solutions; however, advances in computer technology keep increasing the speed of computation [1]. The optimization method, an efficient numerical simulator, and a realistic depiction of the physical process that we intend to describe and optimize are all interconnected components of the optimization process for any optimization problem [2]. Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are considered and are run with the chosen algorithms, namely GA (Genetic Algorithm), the ACO (Ant Colony Optimization) method, and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure the efficiency of these approaches based on metrics such as CPU time, optimality, accuracy, and mean best standard deviation. Methods: In this research work, a mixed-method approach is used. A literature review is performed on the existing optimization algorithms. In addition, an experiment is conducted using ten different benchmark functions with the PSO, ACO, GA, and PIBO algorithms to measure their efficiency on the four metrics CPU time, optimality, accuracy, and mean best standard deviation. This shows which optimization algorithms perform better. Results: The experimental findings are presented in this section. Using the benchmark functions with the suggested method and the other methods, the metrics CPU time, optimality, accuracy, and mean best standard deviation are measured and the results are tabulated; graphs are produced from the data obtained. Analysis and Discussion: The research questions are addressed based on the results of the experiment. Conclusion: We conclude the research by analyzing the performance of the existing optimization algorithms. PIBO performs much better, as shown by the results for the optimality, mean best standard deviation, and accuracy metrics, but has a significant drawback in CPU time: it takes much longer than the PSO algorithm, is close to GA, and still performs much better than the ACO algorithm.
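The abstract does not list which ten benchmark functions were used; as an assumed example, the sketch below runs a minimal particle swarm optimizer (one of the compared algorithms) on two standard benchmark functions, Sphere and Rastrigin, and reports the best value found, which is the kind of measurement behind metrics such as optimality and accuracy.

```python
# Minimal PSO sketch on two standard benchmark functions (assumed examples;
# the thesis does not name its ten functions in the abstract).
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def rastrigin(x):
    return float(10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x)))

def pso(f, dim=10, n_particles=30, iters=500, bounds=(-5.12, 5.12), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))           # positions
    v = np.zeros_like(x)                                   # velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_val)].copy()                 # global best position
    w, c1, c2 = 0.72, 1.49, 1.49                           # commonly used coefficients
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, float(pbest_val.min())

for name, func in [("sphere", sphere), ("rastrigin", rastrigin)]:
    _, best = pso(func)
    print(f"{name}: best value found = {best:.4f}")
```

Wrapping such a run in a timer and repeating it over several seeds gives exactly the CPU time and mean/standard-deviation figures used to compare the algorithms.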
|
164 |
MulTe: A Multi-Tenancy Database Benchmark Framework. Kiefer, Tim; Schlegel, Benjamin; Lehner, Wolfgang, 26 January 2023 (has links)
Multi-tenancy in relational databases has been a topic of interest for a couple of years. On the one hand, the ever-increasing capabilities and capacities of modern hardware easily allow multiple database applications to share one system. On the other hand, cloud computing leads to the outsourcing of many applications to service architectures, which in turn leads to offerings for relational databases in the cloud as well. The ability to benchmark multi-tenancy database systems (MT-DBMSs) is imperative to evaluate and compare systems and helps to reveal otherwise unnoticed shortcomings. With several tenants sharing an MT-DBMS, a benchmark is considerably different from classic database benchmarks and calls for new benchmarking methods and performance metrics. Unfortunately, no single, well-accepted multi-tenancy benchmark for MT-DBMSs is available, and few efforts have been made regarding the methodology and general tooling of the process. We propose a method to benchmark MT-DBMSs and provide a framework for building such benchmarks. To support the cumbersome process of defining and generating tenants, loading and querying their data, and analyzing the results, we propose and provide MulTe, an open-source framework that helps with all these steps.
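A minimal sketch of the kind of per-tenant measurement such a framework automates is shown below. Everything in it is an assumption for illustration: SQLite merely stands in for the MT-DBMS, the workload is trivial, and none of the names come from MulTe's actual API.

```python
# Hypothetical per-tenant micro-benchmark sketch; MulTe's real API and
# workloads are not shown here, and SQLite only stands in for the MT-DBMS.
import sqlite3, time, random, statistics

def make_tenant(con, tenant_id, n_rows=10_000):
    con.execute(f"CREATE TABLE t{tenant_id} (id INTEGER PRIMARY KEY, v REAL)")
    con.executemany(f"INSERT INTO t{tenant_id} (v) VALUES (?)",
                    [(random.random(),) for _ in range(n_rows)])

def run_tenant_queries(con, tenant_id, n_queries=200):
    latencies = []
    for _ in range(n_queries):
        start = time.perf_counter()
        con.execute(f"SELECT COUNT(*), AVG(v) FROM t{tenant_id} WHERE v > ?",
                    (random.random(),)).fetchone()
        latencies.append(time.perf_counter() - start)
    return latencies

con = sqlite3.connect(":memory:")
for tid in range(3):                      # three tenants sharing one instance
    make_tenant(con, tid)
for tid in range(3):
    lat = run_tenant_queries(con, tid)
    print(f"tenant {tid}: median latency {statistics.median(lat) * 1e6:.1f} microseconds")
```

The point of a multi-tenancy benchmark is precisely that per-tenant metrics like these must be defined, generated, collected and compared across many tenants and tenant mixes, which is the step the framework supports.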
|
165 |
Modelling and simulation of membrane bioreactors for wastewater treatment. Janus, Tomasz, January 2013 (links)
The work presented in this thesis leads to the formulation of a dynamic mathematical model of an immersed membrane bioreactor (iMBR) for wastewater treatment. This thesis is organised into three parts, each one describing a different set of tasks associated with model development and simulation. In the first part, the Author qualitatively and quantitatively compares various published activated sludge models, i.e. models of biochemical processes associated with bacterial growth, decay, lysis and substrate utilisation in activated sludge systems. As the thesis is focused on modelling membrane bioreactors (MBRs) which are known to experience membrane fouling as a result of adsorption of biopolymers present in the bulk liquid onto and within the membrane, all activated sludge models considered in this thesis are able to predict, with various levels of accuracy, the concentrations of biopolymeric substances, namely soluble microbial products (SMP) and extracellular polymeric substances (EPS). Some of the published activated sludge models dedicated to modelling SMP and EPS kinetics in MBR systems were unable to predict the SMP and EPS concentrations with adequate levels of accuracy, without compromising the predictions of other sludge and wastewater constituents. In other cases, the model equations and the assumptions made by their authors were questionable. Hence, two new activated sludge models with SMP and EPS as additional components have been formulated, described, and simulated. The first model is based on the Activated Sludge Model No. 1 (ASM1) whereas the second model is based on the Activated Sludge Model No. 3 (ASM3). Both models are calibrated on two sets of data obtained from a laboratory-scale system and a full-scale system and prove to be in very good agreement with the measurements. The second part of this thesis explains the development of two membrane fouling models. These models are set to describe the loss of membrane permeability during filtration of various solutions and suspensions. The main emphasis is placed on filtration of activated sludge mixtures, however the models are designed to be as general as feasibly possible. As fouling is found to be caused by a large number of often very complex processes which occur at different spatial as well as temporal scales, the two fouling models developed here have to consider a number of significant simplifications and assumptions. These simplifications are required to balance the model's accuracy, generality and completeness with its usability in terms of execution times, identifiability of parameters and ease of implementation in general purpose simulators. These requirements are necessary to ascertain that long term simulations as well as optimisation and sensitivity studies performed in this thesis either individually on fouling models or on the complete model of a MBR can be carried out within realistic time-scales. The first fouling model is based on an idea that fouling can be subdivided into just two processes: short-term reversible fouling and long-term irreversible fouling. These two processes are described with two first order ordinary differential equations (ODEs). 
Whilst the first model characterises the membrane filtration process from an observer's input-output point of view without any rigorous deterministic description of the underlying mechanisms of membrane fouling, the second model provides a more theoretical and in-depth description of membrane fouling by incorporating and combining three classical macroscopic mechanistic fouling equations within a single simulation framework. Both models are calibrated on a number of experimental data and show good levels of accuracy for their designated applications and within the intended ranges of operating conditions. In the third part, the first developed biological model (CES-ASM1) is combined with the behavioural fouling model and the links between these two models are formulated to allow complete simulation of a hollow fibre (HF) immersed membrane bioreactor (iMBR). It is assumed that biological processes affect the membrane through production of mixed liquor suspended solids (MLSS), SMP and EPS which cause pore blockage, cake formation, pore diameter constriction, and affect the specific cake resistance (SCR). The membrane, on the other hand, has a direct effect on the bulk liquid SMP concentration due to its SMP rejection properties. SMP are assumed to be solely responsible for irreversible fouling, MLSS is directly linked to the amount of cake depositing on the membrane surface, whereas EPS content in activated sludge affects the cake's SCR. Other links provided in the integrated MBR model include the effects of air scouring on the rate of particle back-transport from the membrane surface and the effects of MLSS concentration on oxygen mass transfer. Although backwashing is not described in great detail, its effects are represented in the model by resetting the initial condition in the cake deposition equation after each backwash period. The MBR model was implemented in Simulink® using the plant layout adopted in the MBR benchmark model of Maere et al. [160]. The model was then simulated with the inputs and operational parameters defined in [36, 160]. The results were compared against the MBR benchmark model of Maere et al. [160] which, contrary to this work, does not take into account the production of biopolymers, the membrane fouling, nor any interactions between the biological and the membrane parts of an MBR system.
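The thesis's calibrated equations are not reproduced in this abstract; the sketch below only illustrates the general resistance-in-series idea behind a two-state (short-term reversible plus long-term irreversible) fouling model, with an assumed functional form and made-up parameter values.

```python
# Illustrative two-state fouling sketch (assumed form and parameters; not the
# thesis's calibrated model). Total resistance = membrane + reversible cake +
# irreversible fouling; flux follows Darcy's law, J = TMP / (mu * R_total).
mu, tmp = 1.0e-3, 3.0e4             # viscosity [Pa s], transmembrane pressure [Pa]
R_m = 1.0e12                        # clean-membrane resistance [1/m]
k_rev, k_det = 5.0e13, 2.0e-4       # cake build-up and detachment coefficients (assumed)
k_irr = 1.0e11                      # irreversible fouling coefficient (assumed)

dt, t_end = 1.0, 4 * 3600.0         # 1 s Euler steps over 4 hours of filtration
R_rev, R_irr = 0.0, 0.0
for step in range(int(t_end / dt)):
    J = tmp / (mu * (R_m + R_rev + R_irr))      # current permeate flux [m/s]
    dR_rev = k_rev * J - k_det * R_rev          # short-term, partly reversible term
    dR_irr = k_irr * J                          # slow, irreversible term
    R_rev += dR_rev * dt
    R_irr += dR_irr * dt

print(f"flux after 4 h: {J * 3.6e6:.1f} L/(m^2 h)")   # 1 m/s = 3.6e6 LMH
```

Backwashing, as described in the abstract, would then correspond to resetting the reversible (cake) state while letting the irreversible state persist between cycles.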
|
166 |
The scheduling of manufacturing systems using Artificial Intelligence (AI) techniques in order to find optimal/near-optimal solutions. Maqsood, Shahid, January 2012 (links)
This thesis aims to review and analyze the scheduling problem in general, and the Job Shop Scheduling Problem (JSSP) in particular, together with the solution techniques applied to these problems. The JSSP is the most general and popular hard combinatorial optimization problem in manufacturing systems. Over the past sixty years, an enormous amount of research has been carried out to solve these problems. The literature review revealed the inherent shortcomings of existing solutions to scheduling problems. This has directed researchers towards hybrid approaches, as no single scheduling technique has yet succeeded in providing optimal solutions to these difficult problems, and there is much potential for improving existing techniques. A hybrid approach complements and compensates for the limitations of each individual solution technique, giving better performance and improved results in both static and dynamic production scheduling environments. Over the past years, hybrid approaches have generally outperformed simple Genetic Algorithms (GAs). Therefore, two novel priority heuristic rules are developed: the Index Based Heuristic and the Hybrid Heuristic. These rules are applied to benchmark JSSPs and compared with popular traditional rules. The results show that the new heuristic rules outperform the traditional heuristic rules over a wide range of benchmark JSSPs. Furthermore, a hybrid GA is developed as an alternative scheduling approach. The hybrid GA uses the novel heuristic rules in its key steps. The hybrid GA is applied to benchmark JSSPs; it is also tested on benchmark flow shop scheduling problems and industrial case studies. The hybrid GA successfully found solutions to JSSPs and is not problem dependent. Its performance across the case studies proves that the developed scheduling model can be applied to any real-world scheduling problem to achieve optimal or near-optimal solutions, which shows the effectiveness of the hybrid GA in real-world scheduling problems. In conclusion, all the research objectives are achieved. Finally, future work on the developed heuristic rules and the hybrid GA is discussed and recommendations are made on the basis of the results.
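The two novel heuristics are not specified in the abstract; as a point of reference, the sketch below builds a feasible schedule for a tiny, made-up job shop instance with the classical shortest-processing-time (SPT) priority rule, the kind of traditional dispatching rule such heuristics are compared against.

```python
# Classical SPT dispatching sketch for a tiny job shop (illustrative baseline;
# the thesis's Index Based and Hybrid heuristics are not reproduced here).
# Each job is a sequence of (machine, processing_time) operations.
jobs = {
    "J1": [(0, 3), (1, 2), (2, 2)],
    "J2": [(0, 2), (2, 1), (1, 4)],
    "J3": [(1, 4), (2, 3), (0, 1)],
}

next_op = {j: 0 for j in jobs}          # index of each job's next operation
job_ready = {j: 0 for j in jobs}        # time the job's previous operation finishes
machine_ready = {m: 0 for m in range(3)}
schedule = []

while any(next_op[j] < len(ops) for j, ops in jobs.items()):
    # Candidate operations: the next unscheduled operation of every unfinished job.
    candidates = [(j, *jobs[j][next_op[j]]) for j in jobs if next_op[j] < len(jobs[j])]
    # SPT rule: pick the shortest processing time, ties broken by earliest start.
    j, m, p = min(candidates,
                  key=lambda c: (c[2], max(job_ready[c[0]], machine_ready[c[1]])))
    start = max(job_ready[j], machine_ready[m])
    finish = start + p
    schedule.append((j, next_op[j], m, start, finish))
    job_ready[j] = machine_ready[m] = finish
    next_op[j] += 1

for j, op, m, s, f in schedule:
    print(f"{j} op{op} on M{m}: {s} -> {f}")
print("makespan:", max(f for *_, f in schedule))
```

Priority rules like this are fast but myopic, which is why hybrid approaches embed them inside metaheuristics such as a GA to search over many alternative schedules.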
|
167 |
Investigation of energy performance and climate change adaptation strategies of hotels in Greece. Farrou, Ifigenia, January 2013 (links)
There is evidence that hotels are the highest energy-use buildings of the tertiary sector in Europe and internationally, because of their operational characteristics and their large number of users; there is therefore potential for significant energy savings. This study investigated the energy performance of the hotel sector in Greece and proposes a methodology for the energy classification of hotels, together with climate change mitigation strategies for an optimum building envelope design for a typical hotel building operated all year or seasonally. This was achieved by collecting operational energy data for 90 Greek hotels and analysing them using the k-means algorithm. A typical hotel building was then modelled in TRNSYS with climate change weather files to assess the impact on its energy demand and to propose climate change mitigation strategies. The assessment was performed via hourly simulations with real climatic data for the past and generated future data for the years 2020, 2050 and 2080. The analysis of the energy data (based on utility supplies) of the 90 hotels shows an average consumption of approximately 290 kWh/m2/year for hotels with annual operation and 200 kWh/m2/year for hotels with seasonal operation. Furthermore, the hotels were classified into well-separated clusters in terms of their electricity and oil consumption; the classification showed that each cluster has a high average energy consumption compared to other buildings in Greece. The cooling energy demand of the typical building increased by 33% and its heating energy demand decreased by 22% in 2010 compared to 1970. The cooling load is expected to rise by 15% in 2020, 34% in 2050 and 63% in 2080 compared to 1970, while the heating load is expected to decrease by 14% in 2020, 29% in 2050 and 46% in 2080. It was found that different strategies can be applied to all-year and seasonally operated buildings for the most energy-efficient performance. These include: a. For all-year operated buildings: insulation, double low-e glazing, intelligently controlled night and day ventilation, ceiling fans and shading; the building of year 2050 would need more shading, and the building of year 2080 would need additional shading and cool materials. b. For seasonally operated buildings: intelligently controlled night and day ventilation, cool materials, ceiling fans, shading and double low-e glazing; only the building of year 2080 would need insulation. This study contributes to understanding the impact of climate change on the energy demand of hotel buildings and proposes mitigation strategies that focus on the building envelope in different periods and climatic zones of Greece.
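The study clusters hotels by their electricity and oil consumption with k-means; the sketch below shows the same kind of classification on synthetic consumption figures (the 90-hotel data set itself is not reproduced here), using scikit-learn's KMeans purely for illustration of the method.

```python
# k-means classification of hotels by annual electricity and oil use
# (synthetic numbers standing in for the 90-hotel data set).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Columns: electricity and oil consumption in kWh/m2/year (made-up values,
# centred only roughly around the averages reported in the abstract).
hotels = np.vstack([
    rng.normal([120, 80], 15, size=(30, 2)),    # seasonal, lower-consumption group
    rng.normal([180, 110], 20, size=(30, 2)),   # intermediate group
    rng.normal([230, 160], 25, size=(30, 2)),   # all-year, higher-consumption group
])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(hotels)
for k in range(3):
    members = hotels[km.labels_ == k]
    print(f"cluster {k}: {len(members)} hotels, "
          f"mean electricity {members[:, 0].mean():.0f}, "
          f"mean oil {members[:, 1].mean():.0f} kWh/m2/year")
```

Cluster centroids obtained this way give the per-class consumption benchmarks against which an individual hotel can then be rated.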
|
168 |
Comparative study of casting simulation softwares for future use during early stages of product development. Navarro Aranda, Monica, January 2015 (links)
Within industrial product development processes there is an increasing demand for reliable predictions of material behaviour, which aims to promote property-driven development that can reduce lead times. The implementation of simulation-based product development with integrated casting simulation may enable design engineers to gain an early understanding of their products with respect to castability, and to orient the subsequent design refinement so as to achieve the desired mechanical properties. This work investigates the suitability of three commercial casting simulation packages, MAGMA 5.2, NovaFlow & Solid 4.7.5 (NFS) and Click2Cast 3.0 (C2C), with respect to the needs of design engineers, such as the prediction of shrinkage porosity and of mechanical properties in relation to the design. Simplified solidification simulations suitable for this stage were performed for three high pressure die cast components with different geometrical constraints. The comparability of the solidification and cooling behaviour predicted by the three packages was studied; it showed that reasonably good agreement between the solidification times predicted by MAGMA and NFS could be obtained, though not between the predictions of MAGMA and C2C. The predictions of the hot spot/porosity areas by the three packages were in good agreement. The calculation times of each package were compared, and MAGMA showed the best performance, yielding significantly shorter times than NFS and C2C. The results obtained were also compared to experimental investigations of porosity, microstructural coarseness, and mechanical properties. There was good agreement between the predicted hot spot areas, i.e. the areas in the geometry that solidify last, and the porosity found in the actual castings, meaning that solidification simulations may be able to provide important information for predicting most of the shrinkage-related porosity locations associated with the casting geometry. However, the lack of detailed knowledge of the casting process at the design stage limits the possibility of predicting all porosities. The microstructure and mechanical properties predicted by MAGMA non-ferrous agreed well in trend with the experimental data, although the predicted values differed considerably in magnitude from the experimental data. Although the MAGMA non-ferrous module was not developed for HPDC components, it was interesting to study whether it could be applied in this context; the models, however, appear to need adaptation to the HPDC process and alloys. In conclusion, even with limited knowledge of the manufacturing parameters, simplified solidification simulations may still be able to provide reasonably reliable and useful information during early development stages in order to optimise the design of castings.
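As background to what such solidification-time predictions rest on, a first-order estimate can be made with Chvorinov's rule, t = B(V/A)^n. The sketch below uses n = 2 and an assumed mould constant, and is far cruder than the transient heat-transfer simulations run in MAGMA, NFS or C2C; it is only meant to show the role of the section modulus V/A that drives where hot spots form.

```python
# Chvorinov's rule sketch: solidification time t = B * (V / A)**n, with n = 2
# and an assumed mould constant B. A back-of-the-envelope estimate only; the
# commercial packages solve the full transient heat-transfer problem instead.
def chvorinov_time(volume_mm3: float, cooling_area_mm2: float,
                   B_s_per_mm2: float = 2.0, n: float = 2.0) -> float:
    """Return an estimated solidification time in seconds."""
    modulus = volume_mm3 / cooling_area_mm2          # casting modulus V/A in mm
    return B_s_per_mm2 * modulus ** n

# Example: a 100 x 60 x 10 mm plate-like section with all faces cooling.
V = 100 * 60 * 10                                    # mm^3
A = 2 * (100 * 60 + 100 * 10 + 60 * 10)              # mm^2
print(f"modulus V/A = {V / A:.2f} mm, "
      f"estimated solidification time ~ {chvorinov_time(V, A):.1f} s")
```

Thicker sections have a larger modulus, solidify last, and are therefore the candidate hot spot and shrinkage-porosity locations the simulations flag.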
|
169 |
COMMANDE NON LINEAIRE SANS CAPTEUR DE LA MACHINE ASYNCHRONE / Nonlinear sensorless control of the induction machine. Traore, Dramane, 19 November 2008 (links) (PDF)
The aim of this thesis is to propose control laws for the induction machine that require no mechanical sensor. Each control law developed was validated experimentally on an industrial benchmark that takes into account the problems of the induction machine at very low speed. The observability study shows that the induction machine is unobservable at very low speed when the speed measurement is not available. The synthesis of observers for the sensorless induction machine was one of the main contributions of this work. First, a high-gain interconnected observer was designed to reconstruct the mechanical variables (speed, load torque) and the magnetic variables (flux). Second, an adaptive interconnected observer was synthesized to estimate, in addition to the mechanical and magnetic variables, the stator resistance, a crucial parameter at very low speed. The results on the "Sensorless observer" benchmark showed a significant improvement in robustness. The design of nonlinear sensorless control laws for the induction machine constitutes the major contribution of this work, demonstrating the global stability of the combined "controller + observer" scheme, with experimental validation on the "Sensorless control" benchmark. Several control laws were designed and compared: a PI type with nonlinear terms, first-order and then higher-order sliding-mode types, and finally a backstepping law. The last two perform well at both low and high speed. The results compared favourably with those of an industrial drive on the "Sensorless control" benchmark: in the unobservable region the industrial drive is unstable, unlike the control laws designed here.
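The abstract names first-order sliding-mode control among the designed laws; as a loose illustration of that idea only (not the thesis's sensorless scheme, which relies on the interconnected observers and the full machine model), the sketch below applies a first-order sliding-mode torque command to a one-state mechanical model with assumed parameters.

```python
# Loose illustration of first-order sliding-mode speed control on a simplified
# mechanical model J*dw/dt = Te - TL - f*w (assumed parameters; not the
# thesis's sensorless control law).
import math

J, f = 0.02, 0.005          # inertia [kg m^2], viscous friction [N m s/rad]
TL = 2.0                    # load torque, unknown to the controller [N m]
K = 8.0                     # sliding gain, chosen larger than the torque uncertainty
w, w_ref = 0.0, 100.0       # actual and reference speed [rad/s]
dt = 1e-4

for step in range(int(2.0 / dt)):            # simulate 2 seconds
    s = w_ref - w                            # sliding variable (speed error)
    Te = f * w + K * math.copysign(1.0, s)   # equivalent term + switching term
    dw = (Te - TL - f * w) / J               # mechanical dynamics
    w += dw * dt

print(f"speed after 2 s: {w:.1f} rad/s (reference {w_ref} rad/s)")
```

Because the switching gain dominates the unknown load torque, the speed error is driven to a small chattering band around zero; that chattering is precisely the drawback that motivates the higher-order sliding-mode and backstepping laws mentioned above.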
|
170 |
Automatické generování umělých XML dokumentů / Automatic Generation of Synthetic XML Documents. Betík, Roman, January 2015 (links)
The aim of this thesis is to research the current possibilities and limitations of the automatic generation of synthetic XML and JSON documents used in the area of Big Data. The first part of the work discusses and compares the properties of the most widely used XML, Big Data and JSON data generators. The next part of the thesis proposes an algorithm for generating semi-structured data. The main focus of the algorithm is on the parallel execution of the generation process while preserving the ability to control the contents of the generated documents. The data generator can also use samples of real data when generating the synthetic data and is capable of automatically creating simple references between JSON documents. The last part of the thesis presents the results of experiments in which the data generator was used to test the MongoDB database, describes its added value and compares it to other solutions.
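A small sketch of the two ideas the abstract highlights, parallel generation and simple cross-document references, is given below using only the Python standard library. The document shapes and names are assumptions for illustration; the thesis's generator and its schema controls are of course much richer.

```python
# Minimal sketch of parallel synthetic JSON generation with simple references
# between documents (standard library only; not the thesis's generator).
import json, random
from multiprocessing import Pool

def make_batch(args):
    worker_id, n_docs, n_users = args
    rng = random.Random(worker_id)               # per-worker seed keeps runs reproducible
    docs = []
    for i in range(n_docs):
        docs.append({
            "_id": f"order-{worker_id}-{i}",
            "user_ref": f"user-{rng.randrange(n_users)}",   # reference to a user document
            "amount": round(rng.uniform(5, 500), 2),
            "items": rng.randint(1, 5),
        })
    return docs

if __name__ == "__main__":
    n_users = 1000
    users = [{"_id": f"user-{u}", "name": f"user {u}"} for u in range(n_users)]
    with Pool(processes=4) as pool:
        batches = pool.map(make_batch, [(w, 25_000, n_users) for w in range(4)])
    orders = [d for batch in batches for d in batch]
    print(len(users), "users,", len(orders), "orders generated")
    print(json.dumps(orders[0], indent=2))
```

Seeding each worker independently keeps the parallel output deterministic while still letting the driver control document counts, value ranges and the referenced collection, which is the control-versus-parallelism trade-off the abstract describes.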
|