About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Modelová optimalizace provozu bioplynové stanice / Model optimizing of the biogas plant operation

Raška, David January 2015
Biogas plants are installations that convert biomass into biogas, a valuable energy source provided the anaerobic process is properly managed. In the Czech Republic, biogas plants are mainly agricultural. This work focuses on evaluating data obtained from a specific biogas plant located in the town of Úpice, which processes biodegradable waste. The aim of this study was to determine which of the processed wastes are beneficial for biogas production. The data were analysed with the statistical program R using cluster analysis, and the clusters were compared with ANOVA (analysis of variance). The results showed a positive effect on biogas production from waste from industrial potato processing, grass and corn silage, rumen contents, and dairy waste. Key words: biogas, plant, anaerobic, fermentation, biodegradable, waste, data analysis
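As a rough illustration of the pipeline this abstract describes — clustering operating records by feedstock composition, then comparing biogas yield across clusters with ANOVA — here is a minimal sketch in Python (the thesis used R). The feedstock columns, coefficients, and data are hypothetical, not the plant's records.

```python
# Sketch of the abstract's pipeline: cluster operating records by feedstock
# composition, then test whether biogas yield differs between clusters.
# The feature names, coefficients, and data are hypothetical.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Columns: tonnes of potato waste, grass/corn silage, rumen content, dairy waste
feedstock = rng.random((60, 4)) * 10.0
yield_m3 = feedstock @ np.array([3.5, 2.0, 1.2, 1.8]) + rng.normal(0, 2, 60)

# k-means into three feedstock regimes (k chosen arbitrarily here)
_, labels = kmeans2(feedstock, k=3, seed=0, minit='++')

# One-way ANOVA: does mean biogas yield differ across the clusters?
groups = [yield_m3[labels == c] for c in np.unique(labels)]
f_stat, p_val = f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```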
2

Integrated feature, neighbourhood, and model optimization for personalised modelling and knowledge discovery

Liang, Wen January 2009
“Machine learning is the process of discovering and interpreting meaningful information, such as new correlations, patterns and trends by sifting through large amounts of data stored in repositories, using pattern recognition technologies as well as statistical and mathematical techniques” (Larose, 2005). In my understanding, machine learning is the process of using different analysis techniques to uncover previously unknown, potentially meaningful information and to discover strong patterns and relationships in a large dataset. Professor Kasabov (2007b) classified computational models into three categories (global, local, and personalised), which are widely used in data analysis and decision support in general, and in medicine and bioinformatics in particular. Most recently, the concept of personalised modelling has been applied to various disciplines such as personalised medicine and personalised drug design for known diseases (e.g. cancer, diabetes, and brain disease), as well as to other modelling problems in ecology, business, finance, crime prevention, and so on. The philosophy behind the personalised modelling approach is that every person is different, and will therefore benefit from a personalised model and treatment. However, personalised modelling is not without issues, such as defining the correct number of neighbours or an appropriate number of features. The principal goal of this research is therefore to study and address these issues and to create a novel framework and system for personalised modelling. The framework allows users to select and optimise the most important features and nearest neighbours for a new input sample in relation to a given problem, based on a weighted variable distance measure, in order to obtain more precise prognostic accuracy and personalised knowledge than global and local modelling approaches provide.
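A minimal sketch of the core mechanism this abstract describes: for each new sample, a feature-weighted distance selects the nearest neighbours, and the prediction is made from that personalised neighbourhood only. The weights, k, and data below are hypothetical illustrations, not the thesis's optimised values.

```python
# Sketch of personalised modelling: per-sample neighbour selection under a
# feature-weighted distance, then prediction from those neighbours only.
# Weights, k, and the data are hypothetical illustrations.
import numpy as np

def personalised_predict(x_new, X, y, weights, k=5):
    """Predict for x_new from its k nearest neighbours under a weighted
    Euclidean distance (larger weight = more important feature)."""
    d = np.sqrt(((X - x_new) ** 2 * weights).sum(axis=1))
    nearest = np.argsort(d)[:k]           # indices of the k closest samples
    return y[nearest].mean(), nearest     # local prediction + its neighbourhood

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)   # only 2 informative features
w = np.array([1.0, 0.5, 0.01, 0.01, 0.01, 0.01])  # down-weight the noise
pred, neigh = personalised_predict(rng.normal(size=6), X, y, w, k=7)
print(f"personalised estimate: {pred:.2f} from samples {neigh}")
```

In the thesis's framework both the weights and k would themselves be optimised per sample; here they are fixed for brevity.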
3

Performance Improvement of ED at VGH Using Simulation and Optimization

Zhao, Yuancheng 15 September 2013
The emergency department (ED) is one of the busiest clinical units in Winnipeg's Victoria General Hospital (VGH), and it faces the challenge of long patient waiting times under increasing healthcare demand and limited resources. This research investigates the critical factors of ED operation to enhance operational efficiency using simulation modelling and optimization. The contribution of this research is the integration of simulation and optimization for the performance improvement of ED operations. Discrete-event simulation (DES) provides a cost-effective tool to analyse the performance of ED operations and to evaluate potential alternatives. Design of experiments (DOE) and scatter search (SS) are proposed for model optimization, searching the ED's potential capacity for waiting-time reduction. Reducing waiting times accelerates patient flow, resulting in more efficient patient throughput in the ED. A specific strategy for improving ED operation is suggested based on the simulation model.
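A minimal sketch of the kind of discrete-event model the abstract describes: patients arrive at random, wait for one of a few treatment bays, and mean waiting time is measured while capacity is varied. The arrival rate, service rate, and bay counts below are hypothetical, not VGH figures.

```python
# Minimal discrete-event simulation of an ED as an M/M/c queue:
# patients arrive at random, wait for one of `servers` treatment bays,
# and we record their waiting times. Rates and capacities are hypothetical.
import heapq
import random

def simulate_ed(servers=3, arrival_rate=0.5, service_rate=0.2,
                n_patients=10_000, seed=0):
    rng = random.Random(seed)
    free_at = [0.0] * servers        # time at which each bay becomes free
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate)       # next arrival time
        bay_free = heapq.heappop(free_at)        # earliest-free bay
        start = max(t, bay_free)                 # wait if all bays are busy
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(service_rate))
    return total_wait / n_patients

# Simple capacity search in the spirit of the thesis: vary the bay count
for c in range(2, 6):
    print(f"{c} bays: mean wait = {simulate_ed(servers=c):.2f} time units")
```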
4

Using Decision Tree Voting to Select a Polyhedral Model Loop Transformation

Ruvinskiy, Ray January 2013
Algorithms in fields like image manipulation, sound and signal processing, and statistics frequently employ tight loops. These loops are computationally intensive and CPU-bound, making their performance highly dependent on efficient utilization of the CPU pipeline and memory bus. Recent years have seen CPU pipelines becoming more and more complicated, with features such as branch prediction and speculative execution. At the same time, clock speeds have stopped their prior exponential growth rate due to heat dissipation issues, and multiple cores have become prevalent. These developments have made it more difficult for developers to reason about how their code executes on the CPU, which in turn makes it difficult to write performant code. An automated method to take code and optimize it for most efficient execution would, therefore, be desirable. The Polyhedral Model allows the generation of alternative transformations for a loop nest that are semantically equivalent to the original. The transformations vary the degree of loop tiling, loop fusion, loop unrolling, parallelism, and vectorization. However, selecting the transformation that would most efficiently utilize the architecture remains challenging. Previous work utilizes regression models to select a transformation, using as features hardware performance counter values collected during a sample run of the program being optimized. Due to inaccuracies in the resulting regression model, the transformation selected by the model as the best often yields unsatisfactory performance. As a result, previous work resorts to a five-shot technique, which entails running the top five transformations suggested by the model and selecting the best one based on their actual runtimes. However, for long-running benchmarks, five runs may take an excessive amount of time. I present a variation on the previous approach which does not need to resort to the five-shot selection process to achieve performance comparable to the best five-shot results reported in previous work. With the transformations in the search space ranked in reverse runtime order, the transformation selected by my classifier is, on average, in the 86th percentile. There are several key factors contributing to the performance improvements attained by my method: formulating the problem as a classification problem rather than a regression problem, using static features in addition to dynamic performance counter features, performing feature selection, and using ensemble methods to boost the performance of the classifier. Decision trees are constructed from pairs of features (performance counters and structural features that can be determined statically from the source code). The trees are then evaluated according to the number of benchmarks for which they select a transformation that performs better than two baselines: the original program, and the expected runtime if a randomly selected transformation were applied. The top 20 trees vote to select a final transformation.
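A minimal sketch of the voting scheme on synthetic data: one shallow decision tree is trained per feature pair, the trees are ranked on held-out data, and the best trees vote on whether a candidate transformation is worth applying. The features, labels, and tree count here are illustrative only, not the thesis's setup.

```python
# Sketch of the ensemble idea: train one shallow decision tree per feature
# pair, rank trees on held-out data, and let the top trees vote on whether
# a candidate loop transformation is worth applying. Data is synthetic.
import numpy as np
from itertools import combinations
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))              # perf counters + static loop features
y = (X[:, 0] * X[:, 3] > 0).astype(int)    # 1 = "transformation is beneficial"
X_tr, y_tr, X_val, y_val = X[:200], y[:200], X[200:], y[200:]

scored = []
for i, j in combinations(range(X.shape[1]), 2):
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_tr[:, [i, j]], y_tr)
    acc = tree.score(X_val[:, [i, j]], y_val)   # rank trees by validation accuracy
    scored.append((acc, (i, j), tree))

top = sorted(scored, key=lambda s: s[0], reverse=True)[:5]  # top trees vote
votes = np.mean([t.predict(X_val[:, list(p)]) for _, p, t in top], axis=0)
print("ensemble picks 'apply transformation' for",
      int((votes > 0.5).sum()), "of", len(y_val), "candidates")
```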
5

Modelová optimalizace provozu bioplynové stanice po ověřovacích fázích provozu / Model optimizing of the biogas plant operation after testing operational phases

Raška, David January 2015
Biogas plants are installations that convert biomass into biogas, a valuable energy source provided the anaerobic process is properly managed. In the Czech Republic, biogas plants are mainly agricultural. This work focuses on evaluating data obtained from a specific biogas plant located in the town of Úpice, which processes biodegradable waste. The data are evaluated qualitatively and with cluster analysis in the statistical program R, as well as with proposed linear regression models in Matlab. The results were applied to suggest several measures for the management of the biogas plant and to create a non-linear regression model, which can be developed further. Keywords: biogas, plant, anaerobic, fermentation, biodegradable, waste, data analysis, differential model
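As a rough illustration of the kind of non-linear regression model mentioned here, the sketch below fits a first-order saturation curve of cumulative biogas production in Python (the thesis used R and Matlab). The model form, parameter values, and data are hypothetical, not the thesis's model.

```python
# Sketch of fitting a non-linear regression model of cumulative biogas
# production (a first-order kinetic/saturation curve, a common form for
# anaerobic digestion; the model choice and data are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def production(t, B_max, k):
    """Cumulative biogas volume: approaches B_max with rate constant k."""
    return B_max * (1.0 - np.exp(-k * t))

t_days = np.linspace(0, 30, 31)
rng = np.random.default_rng(3)
observed = production(t_days, 420.0, 0.15) + rng.normal(0, 10, t_days.size)

popt, pcov = curve_fit(production, t_days, observed, p0=(300.0, 0.1))
print(f"fitted B_max = {popt[0]:.1f} m3, k = {popt[1]:.3f} 1/day")
```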
6

Study of Compound Gauss-Markov Image Field

Lin, Chi-Shing 04 September 2004
In this thesis, we present a comprehensive study of the well-known compound Gauss-Markov (CGM) image model. In this model, a pixel in the image random field is determined by the surrounding pixels according to a predetermined line field. The model is useful in image restoration, which applies two steps iteratively: restoring the line field from the assumed image field, and restoring the image field from the just-computed line field. CGM image modelling is characterized by the line fields and the generating noise. In this thesis we apply combinations of techniques, such as changing the processing order, immediate updating, probability determination, and different methods, to find the best modelling. Furthermore, the effects of these modelling choices are demonstrated through energy, visual quality, and error resistance. Finally, by solving a set of nonlinear equations, we apply the CGM model to a restoration problem for an image corrupted by a dusted lens.
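A minimal sketch of the two-step alternation described above: switch on a line element wherever the local intensity contrast exceeds a threshold, then re-estimate each pixel from the neighbours not separated by a line. The threshold, damping factor, and synthetic test image are simplifications for illustration, not the thesis's estimator.

```python
# Sketch of the CGM-style two-step iteration: (1) turn on a line element
# where local contrast is large, (2) re-estimate each pixel as the mean of
# the neighbours not cut off by a line. Threshold, damping, and the
# synthetic image are hypothetical simplifications.
import numpy as np

def restore_cgm(img, contrast_thresh=0.3, n_iter=10):
    f = img.copy()
    for _ in range(n_iter):
        # Step 1: line fields between vertically/horizontally adjacent pixels
        v_line = np.abs(f[1:, :] - f[:-1, :]) > contrast_thresh
        h_line = np.abs(f[:, 1:] - f[:, :-1]) > contrast_thresh
        # Step 2: average each pixel over neighbours with no line in between
        g = np.zeros_like(f)
        w = np.zeros_like(f)
        g[1:, :] += np.where(v_line, 0.0, f[:-1, :]); w[1:, :] += ~v_line
        g[:-1, :] += np.where(v_line, 0.0, f[1:, :]); w[:-1, :] += ~v_line
        g[:, 1:] += np.where(h_line, 0.0, f[:, :-1]); w[:, 1:] += ~h_line
        g[:, :-1] += np.where(h_line, 0.0, f[:, 1:]); w[:, :-1] += ~h_line
        keep = w > 0
        f[keep] = 0.5 * f[keep] + 0.5 * (g[keep] / w[keep])  # damped update
    return f

rng = np.random.default_rng(4)
clean = np.kron(rng.integers(0, 2, (4, 4)).astype(float), np.ones((8, 8)))
noisy = clean + rng.normal(0, 0.1, clean.shape)
restored = restore_cgm(noisy)
print("MSE before:", float(((noisy - clean) ** 2).mean()),
      "after:", float(((restored - clean) ** 2).mean()))
```

The same skeleton covers the design questions both CGM theses raise: the processing order of the two steps, when the line field is updated, and the contrast condition required for a line to exist.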
7

Investigation on Gauss-Markov Image Modeling

You, Jhih-siang 30 August 2006
Image modeling is a foundation for many image processing applications. The compound Gauss-Markov (CGM) image model has been proven useful in restoring natural images. In contrast, other Markov random fields (MRFs), such as Gaussian MRF models, specialize in segmentation of texture images. A CGM image is restored in two steps applied iteratively: restoring the line field from the assumed image field, and restoring the image field from the just-computed line field. The line fields are the most important element of successful CGM modeling, and a convincing line field should treat horizontal and vertical lines fairly. The processing order and the occasions on which updates are applied have great effects on the resulting line fields in iterative computation; these two techniques are the basis of our search for the best CGM modeling. In addition, we impose an extra condition for a line to exist, to compensate for the bias of the line fields: the condition requires a brightness contrast across the line. Our best modeling is verified by the quality of image restoration, both visually and numerically, for natural images. Furthermore, an artificial image generated by CGM is tested to confirm that our best modeling is correct.
