31

Demand side management of a run-of-mine ore milling circuit

Matthews, Bjorn January 2015 (has links)
In South Africa, where 75% of the world's platinum is produced, electricity tariffs have increased significantly over recent years. This introduces challenges to the energy-intensive mineral processing industry, in which run-of-mine ore milling circuits are the most energy-intensive unit processes. This work explores opportunities to reduce the operating costs associated with power consumption through process control. Demand side management was implemented on a milling circuit using load shifting: time-of-use tariffs were exploited by shifting the power consumption of the milling circuit from more expensive to cheaper tariff periods, reducing the overall cost of electricity consumption. Throughput lost during high-tariff periods was recovered during low-tariff periods so that milling circuit throughput was maintained over a week-long horizon. To implement and evaluate demand side management through process control, a load shifting controller was developed for the non-linear Hulbert model. The load shifting controller was implemented through a multi-layered control approach: a regulatory linear MPC controller addresses technical control requirements such as milling circuit stability, and a supervisory real-time optimizer meets economic control requirements such as reducing electricity costs while maintaining throughput. Scenarios designed to evaluate the sensitivities of the load shifting controller showed that mill power set-point optimization is proportionally related to the mineral price, and that set-points are not sensitive to absolute electricity costs but rather to the relationships between peak, standard, and off-peak electricity costs. The load shifting controller was most effective where weekly throughput was between approximately 90% and 100% of the maximum throughput capacity. From an economic point of view, it is shown that for milling circuits that are not throughput constrained, load shifting can reduce operating costs associated with electricity consumption. Simulations indicate that realizable cost savings are between R16.51 and R20.78 per gram of unrefined platinum processed by the milling circuit, which amounts to a potential annual cost saving of up to R1.89 million for a milling circuit that processes 90 t/h at a head grade of 3 g/t. / Dissertation (MEng)--University of Pretoria, 2015. / Electrical, Electronic and Computer Engineering / Unrestricted
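To make the load-shifting idea concrete, the sketch below schedules hourly mill throughput over a one-week horizon as a linear program that minimizes electricity cost under a time-of-use tariff while meeting a weekly throughput target. This is not the thesis's MPC/Hulbert-model controller; the tariff shape, specific energy, and capacity figures are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

HOURS = 7 * 24  # one-week horizon, hourly resolution

# Illustrative time-of-use tariff in R/kWh: off-peak nights, weekday peaks.
# The shape and values are assumptions, not the tariff used in the thesis.
tariff = np.where(np.arange(HOURS) % 24 < 6, 0.5, 1.2)
for d in range(5):  # weekday morning and evening peak windows
    tariff[d * 24 + 7 : d * 24 + 10] = 2.5
    tariff[d * 24 + 18 : d * 24 + 20] = 2.5

kwh_per_tonne = 15.0          # assumed specific energy of the circuit
cap = 100.0                   # assumed maximum throughput, t/h
target = 0.9 * cap * HOURS    # weekly tonnes: 90% of maximum capacity

# Minimize electricity cost subject to meeting the weekly throughput
# target; the lower bound keeps the mill within a stable operating range.
c = tariff * kwh_per_tonne    # cost of processing one tonne in each hour
res = linprog(c, A_ub=[-np.ones(HOURS)], b_ub=[-target],
              bounds=[(0.5 * cap, cap)] * HOURS)
print(f"weekly electricity cost: R{res.fun:,.0f}, tonnes: {res.x.sum():,.0f}")
```

The solver naturally pushes throughput to the off-peak hours and runs at the minimum stable rate during peaks, which is the load-shifting behaviour the abstract describes.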
32

Two-stage learning for tree ensemble methods

Alexandre Werneck Andreza 23 November 2020 (has links)
In supervised learning, tree ensemble methods are recognized for their high performance across a wide range of applications, and several references report them to be resistant to overfitting. This work investigates that observed resistance by proposing a method that exploits it. When predicting an instance, a tree ensemble determines the leaf of each tree into which the instance falls. Our method then learns a new function over this set of leaves, minimizing a loss function on the training set, in a sense deliberately overfitting in this second learning phase. The approach can be interpreted either as an automated feature generator or as a predictor optimizer.
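The leaf-based second stage can be sketched with standard tooling: fit an ensemble, map each instance to the tuple of leaves it falls into, and fit a linear model over one-hot-encoded leaf indices to minimize a training loss. This follows the idea described in the abstract, not the thesis's exact procedure; the dataset and model settings are placeholders.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: fit the ensemble as usual.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Stage 2: re-learn the prediction as a function of the leaves each
# instance falls into, minimizing a loss on the training set.
enc = OneHotEncoder(handle_unknown="ignore")
L_tr = enc.fit_transform(rf.apply(X_tr))   # (n_samples, n_trees) leaf ids
L_te = enc.transform(rf.apply(X_te))
stage2 = LogisticRegression(max_iter=1000).fit(L_tr, y_tr)

print("forest accuracy :", rf.score(X_te, y_te))
print("2-stage accuracy:", stage2.score(L_te, y_te))
```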
33

Resource-Constrained Project Scheduling with Autonomous Learning Effects

Ticktin, Jordan M 01 December 2019 (has links)
It is commonly assumed that experience leads to efficiency, yet this is largely unaccounted for in resource-constrained project scheduling. This thesis considers the idea that learning effects could allow selected activities to be completed in reduced time if they are scheduled after activities in which workers learn relevant skills, and computationally explores the effect of this autonomous, intra-project learning on optimal makespan and problem difficulty. A learning extension to the standard resource-constrained project scheduling problem (RCPSP) is proposed. Multiple parameters are considered, including project size, learning frequency, and learning intensity. A test instance generator is developed to adapt the popular PSPLIB library of scheduling problems to this model. Four different Constraint Programming model formulations are developed to solve the model efficiently. Bounding techniques are proposed for tightening optimality gaps, including four lower-bounding model relaxations, an upper-bounding model relaxation, and a Destructive Lower Bounding method. Hundreds of thousands of scenarios are tested to empirically determine the most efficient solution approaches and the impact of learning on project schedules. Potential makespan reductions as high as 50% are discovered, with the learning effects resembling a learning curve with a point of diminishing returns. A combination of bounding techniques is shown to produce significantly tighter optimality gaps.
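A minimal Constraint Programming sketch of such a learning link (using OR-Tools CP-SAT; one of many possible formulations, with invented activity data and duration reduction, not the thesis's models): an activity's duration shrinks when it is scheduled entirely after the activity where the relevant skill is learned, and the solver weighs that saving against resource contention.

```python
from ortools.sat.python import cp_model

m = cp_model.CpModel()
horizon = 30

# Three activities: base duration and demand on one resource of capacity 2.
base = {"A": 4, "B": 6, "C": 5}
demand = {"A": 1, "B": 1, "C": 2}

start, dur, end, iv = {}, {}, {}, {}
for n in base:
    start[n] = m.NewIntVar(0, horizon, f"s_{n}")
    dur[n] = m.NewIntVar(1, base[n], f"d_{n}")
    end[n] = m.NewIntVar(0, horizon, f"e_{n}")
    iv[n] = m.NewIntervalVar(start[n], dur[n], end[n], f"i_{n}")

m.Add(dur["A"] == base["A"])
m.Add(dur["B"] == base["B"])
m.Add(start["B"] >= end["A"])                 # ordinary precedence A -> B

# Learning link: if C runs entirely after A, workers have acquired the
# relevant skill and C's duration drops from 5 to 3 (values assumed).
learned = m.NewBoolVar("learned")
m.Add(start["C"] >= end["A"]).OnlyEnforceIf(learned)
m.Add(start["C"] < end["A"]).OnlyEnforceIf(learned.Not())
m.Add(dur["C"] == base["C"] - 2 * learned)

m.AddCumulative([iv[n] for n in base], [demand[n] for n in base], 2)

makespan = m.NewIntVar(0, horizon, "makespan")
m.AddMaxEquality(makespan, [end[n] for n in base])
m.Minimize(makespan)

solver = cp_model.CpSolver()
solver.Solve(m)
print("makespan:", solver.Value(makespan),
      {n: (solver.Value(start[n]), solver.Value(dur[n])) for n in base})
```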
34

Multi-objective day-ahead scheduling of microgrids using modified grey wolf optimizer algorithm

Javidsharifi, M., Niknam, T., Aghaei, J., Mokryani, Geev, Papadopoulos, P. 10 August 2018 (has links)
Investigation of the environmental/economic optimal operation management of a microgrid (MG), as a case study for applying a novel modified multi-objective grey wolf optimizer (MMOGWO) algorithm, is presented in this paper. MGs can be considered a fundamental solution for the management of distributed generators (DGs) in future smart grids. In multi-objective problems, since the objective functions conflict, the best compromise solution should be extracted through an efficient approach, and a proper method is applied here for exploring it. Additionally, a novel distance-based method is proposed to control the size of the repository within an aimed limit, which leads to fast and precise convergence along with a well-distributed Pareto optimal front. The proposed method is implemented in a typical grid-connected MG with non-dispatchable units including renewable energy sources (RESs), along with a hybrid power source (micro-turbine, fuel-cell and battery) as dispatchable units to accumulate excess energy or to equalize power mismatch, by optimal scheduling of DGs and of the power exchange between the utility grid and the storage system. The efficiency of the suggested algorithm in satisfying the load and optimizing the objective functions is validated through comparison with different methods, including PSO and the original GWO. / Supported in part by Royal Academy of Engineering Distinguished Visiting Fellowship under Grant DVF1617\6\45
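For reference, the core grey wolf optimizer update around which such modified variants are built can be sketched as follows. This is the standard single-objective GWO on a toy function; the paper's multi-objective machinery (Pareto repository, distance-based pruning) is not reproduced here.

```python
import numpy as np

def gwo(f, dim=10, wolves=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    """Standard grey wolf optimizer: wolves move relative to the three
    best solutions found so far (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (wolves, dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        alpha, beta, delta = X[np.argsort(fit)[:3]]
        a = 2.0 * (1 - t / iters)          # exploration parameter: 2 -> 0
        steps = []
        for leader in (alpha, beta, delta):
            A = a * (2 * rng.random(X.shape) - 1)
            C = 2 * rng.random(X.shape)
            steps.append(leader - A * np.abs(C * leader - X))
        X = np.clip(sum(steps) / 3.0, lb, ub)
    fit = np.apply_along_axis(f, 1, X)
    return X[fit.argmin()], fit.min()

best, val = gwo(lambda x: float(np.sum(x * x)))    # sphere test function
print("best value:", val)
```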
35

Reduction of Query Optimizer Plan Diagrams

Darera, Pooja N 12 1900 (has links)
Modern database systems use a query optimizer to identify the most efficient strategy, called a "plan", to execute declarative SQL queries. Optimization is a mandatory exercise since the difference in cost between the best plan and a random choice can be orders of magnitude. The role of query optimization is especially critical for the decision support queries featured in data warehousing and data mining applications. For a query on a given database and system configuration, the optimizer's plan choice is primarily a function of the selectivities of the base relations participating in the query. A pictorial enumeration of the execution plan choices of a database query optimizer over this relational selectivity space is called a "plan diagram". It has been shown recently that these diagrams are often remarkably complex and dense, with a large number of plans covering the space. An interesting research problem that immediately arises is whether complex plan diagrams can be reduced to a significantly smaller number of plans without materially compromising query processing quality. The motivation is that reduced plan diagrams provide several benefits, including quantifying the redundancy in the plan search space, enhancing the applicability of parametric query optimization, identifying error-resistant and least-expected-cost plans, and minimizing the overhead of multi-plan approaches. In this thesis, we investigate the plan diagram reduction issue from theoretical, statistical and empirical perspectives. Our analysis shows that optimal plan diagram reduction, w.r.t. minimizing the number of plans in the reduced diagram, is an NP-hard problem, and remains so even for a storage-constrained variation. We then present CostGreedy, a greedy reduction algorithm that has tight and optimal performance guarantees, and whose complexity scales linearly with the number of plans in the diagram. Next, we construct an extremely fast estimator, AmmEst, for identifying the location of the best tradeoff between the reduction in plan cardinality and the impact on query processing quality. Both CostGreedy and AmmEst have been incorporated into the publicly available Picasso optimizer visualization tool. Through extensive experimentation with benchmark query templates on industrial-strength database optimizers, we demonstrate that, with only a marginal increase in query processing costs, CostGreedy reduces even complex plan diagrams running to hundreds of plans to "anorexic" levels (a small absolute number of plans). While these results are produced using a highly conservative upper-bounding of plan costs based on a cost monotonicity constraint, when the costing is done on "actuals" using remote plan costing, the reduction obtained is even greater, often resulting in a single plan in the reduced diagram. We also highlight how anorexic reduction provides enhanced resistance to selectivity estimate errors, a long-standing bane of good plan selection. In summary, this thesis demonstrates that complex plan diagrams can be efficiently converted to anorexic reduced diagrams, a result with useful implications for the design and use of next-generation database query optimizers.
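A toy version of greedy plan-diagram reduction conveys the flavor of the approach (random stand-in costs; not the actual CostGreedy algorithm or its guarantees): a plan may be retired if every query point it covers can move to a surviving plan whose cost stays within a threshold of optimal.

```python
import numpy as np

rng = np.random.default_rng(0)
points, n_plans, lam = 200, 12, 0.10   # lam: allowed cost increase (10%)

# cost[i, p]: upper-bounded cost of executing query point i with plan p.
# Random data stands in for real optimizer output over the selectivity grid.
cost = rng.uniform(1.0, 10.0, (points, n_plans))
opt = cost.min(axis=1)                 # optimal cost at each point
assign = cost.argmin(axis=1)           # the plan diagram: best plan per point

# Greedy reduction in the spirit of CostGreedy: repeatedly retire the plan
# covering the fewest points if every one of its points can move to some
# surviving plan whose cost stays within (1 + lam) of optimal.
alive = set(range(n_plans))
changed = True
while changed and len(alive) > 1:
    changed = False
    for p in sorted(alive, key=lambda q: np.sum(assign == q)):
        pts = np.where(assign == p)[0]
        others = [q for q in alive if q != p]
        ok = cost[np.ix_(pts, others)] <= (1 + lam) * opt[pts, None]
        if ok.any(axis=1).all():       # every point has an acceptable home
            for i in pts:
                cands = [q for q in others if cost[i, q] <= (1 + lam) * opt[i]]
                assign[i] = min(cands, key=lambda q: cost[i, q])
            alive.discard(p)
            changed = True
            break

print(f"plan diagram reduced from {n_plans} to {len(alive)} plans")
```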
36

Efficient processing of multiway spatial join queries in distributed systems

Oliveira, Thiago Borges de 29 November 2017 (has links)
Multiway spatial join is an important type of query in spatial data processing, and its efficient execution is a requirement for moving spatial data analysis to scalable platforms, as has already happened with relational and unstructured data. In this thesis, we provide a set of comprehensive models and methods to efficiently execute multiway spatial join queries in distributed systems. We introduce a cost-based optimizer that is able to select a good execution plan for processing such queries in distributed systems, taking into account the partitioning of data based on the spatial attributes of the datasets; the intra-operator level of parallelism, which enables high scalability; and the economy of cluster resources achieved by appropriately scheduling the queries before execution. We propose a cost model, based on relevant metadata about the spatial datasets and the data distribution, which identifies the pattern of costs incurred when processing a query in this environment. We formalize the distributed multiway spatial join plan scheduling problem as a bi-objective linear integer model that considers the minimization of both the makespan and the communication cost. Three methods are proposed to compute schedules based on this model that significantly reduce the resource consumption required to process a query. Although targeting multiway spatial join query scheduling, these methods can be applied to other kinds of problems in distributed systems, notably problems that require both the alignment of data partitions and the assignment of jobs to machines. Additionally, we propose a method to control the usage of resources and increase system throughput in the presence of constraints on network or processing capacity. The proposed cost-based optimizer was able to select good execution plans for all queries in our experiments, which used public datasets with a significant range of sizes and complex spatial objects. We also present an execution engine that is capable of performing the queries with near-linear scalability with respect to execution time.
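The bi-objective scheduling model can be illustrated with a small weighted-sum scalarization: assign jobs to machines minimizing a blend of makespan and communication cost. All data below are invented, the modeling library (PuLP) is my choice, and the thesis's actual model and solution methods are richer than this sketch.

```python
import pulp

jobs = [0, 1, 2, 3, 4]
machines = [0, 1]
proc = {0: 4, 1: 3, 2: 5, 3: 2, 4: 6}          # processing times (assumed)
comm = {0: {0: 1, 1: 4}, 1: {0: 3, 1: 1}, 2: {0: 2, 1: 5},
        3: {0: 4, 1: 1}, 4: {0: 1, 1: 3}}      # communication costs (assumed)
alpha = 0.7                                    # weight on makespan vs. comm

prob = pulp.LpProblem("bi_objective_schedule", pulp.LpMinimize)
x = pulp.LpVariable.dicts("x", (jobs, machines), cat="Binary")
mk = pulp.LpVariable("makespan", lowBound=0)

for j in jobs:                                  # every job runs somewhere
    prob += pulp.lpSum(x[j][m] for m in machines) == 1
for m in machines:                              # makespan bounds each load
    prob += pulp.lpSum(proc[j] * x[j][m] for j in jobs) <= mk

comm_cost = pulp.lpSum(comm[j][m] * x[j][m] for j in jobs for m in machines)
prob += alpha * mk + (1 - alpha) * comm_cost    # weighted-sum scalarization
prob.solve(pulp.PULP_CBC_CMD(msg=False))

plan = {j: next(m for m in machines if x[j][m].value() > 0.5) for j in jobs}
print(plan, "makespan:", mk.value(), "comm:", pulp.value(comm_cost))
```

Sweeping alpha between 0 and 1 traces out trade-off solutions between the two objectives, which is one standard way to explore a bi-objective model like the one the abstract describes.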
37

Conversion Rate Optimization with A/B Testing

Pařízek, Michal January 2010 (has links)
The goal of this Master's thesis is to explain the importance of Conversion Rate Optimization (CRO) in today's e-commerce market. Many people in the Czech Republic confuse this term with the more popular and similar term Search Engine Optimization (SEO). CRO focuses on a particular website and its ability to meet business goals; its aim is to improve the website and its conversion rates at acceptable cost, so return on investment (ROI) is important here. CRO is a broad topic, so I focus on only one part: A/B testing. This technique is based on showing different variations of a specific web page to users, who are divided among the variations so that each group sees a different one. From the results we can see which variation performed best in meeting the business goals. The thesis thoroughly introduces this technique and a tool well suited to it, Google Website Optimizer, which is free and known and used worldwide. The thesis is divided into several main chapters: the introduction describes a few marketing models closely related to CRO, the following chapters focus on CRO, A/B testing, and Google Website Optimizer, and the last chapter presents a case study full of practical examples. So far there are only a few resources about CRO in the Czech language, which also demonstrates that CRO is not yet very popular in the Czech Republic. By contrast, in the USA and Western Europe these techniques are commonly used: not only big companies like Google or Amazon.com but even small businesses apply CRO techniques and profit from them. I think it is only a matter of time before CRO becomes more popular in the Czech Republic as well, and I therefore believe many people will find this thesis useful.
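Independently of the tool, the statistical core of an A/B test can be sketched as a two-proportion z-test on conversion counts. The numbers below are hypothetical, and Google Website Optimizer performs its own analysis rather than this exact test.

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical A/B test results: visitors and conversions per variation.
n_a, conv_a = 5000, 300          # control: 6.0% conversion rate
n_b, conv_b = 5000, 360          # variation: 7.2% conversion rate

p_a, p_b = conv_a / n_a, conv_b / n_b
p = (conv_a + conv_b) / (n_a + n_b)            # pooled rate under H0
se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # std. error of the difference
z = (p_b - p_a) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided test

print(f"uplift: {(p_b - p_a) / p_a:+.1%}, z = {z:.2f}, p = {p_value:.4f}")
```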
38

Multi-guided particle swarm optimization: a multi-objective particle swarm optimizer

Scheepers, Christiaan January 2017 (has links)
An exploratory analysis in low-dimensional objective space of the vector evaluated particle swarm optimization (VEPSO) algorithm is presented. A novel visualization technique is introduced and applied to perform the exploratory analysis. The exploratory analysis, together with a quantitative analysis, revealed that the VEPSO algorithm continues to explore without exploiting the well-performing areas of the search space. A detailed investigation into the influence that the choice of archive implementation has on the performance of the VEPSO algorithm is presented, considering both Pareto-optimal front (POF) solution diversity and convergence towards the true POF. Attainment surfaces are investigated for their suitability in efficiently comparing two multi-objective optimization (MOO) algorithms. A new measure to objectively compare algorithms in multi-dimensional objective space, based on attainment surfaces, is presented. This measure, referred to as the porcupine measure, adapts the attainment surface measure by using a statistical test along with weighted intersection lines. Loosely based on the VEPSO algorithm, the multi-guided particle swarm optimization (MGPSO) algorithm is presented and evaluated. The results indicate that the MGPSO algorithm overcomes the weaknesses of the VEPSO algorithm and also outperforms a number of state-of-the-art MOO algorithms on at least two benchmark test sets. / Thesis (PhD)--University of Pretoria, 2017. / Computer Science / PhD / Unrestricted
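The MGPSO velocity update, as commonly described, adds a third attractor drawn from a Pareto archive alongside the personal and social bests. The sketch below illustrates that idea with assumed coefficient values and a stand-in archive solution; it is not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

def mgpso_velocity(v, x, pbest, gbest, guide,
                   w=0.73, c1=1.49, c2=1.49, c3=1.49, lam=0.5):
    """One MGPSO-style velocity update. The third attractor is a solution
    drawn from the Pareto archive (`guide`); `lam` trades off the swarm's
    own best against the archive guide. Coefficient values are assumed."""
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))
    return (w * v
            + c1 * r1 * (pbest - x)
            + lam * c2 * r2 * (gbest - x)
            + (1 - lam) * c3 * r3 * (guide - x))

dim = 5
x, v = rng.uniform(-1, 1, dim), np.zeros(dim)
pbest, gbest = x.copy(), rng.uniform(-1, 1, dim)
guide = rng.uniform(-1, 1, dim)        # stand-in for an archive solution
v = mgpso_velocity(v, x, pbest, gbest, guide)
x = x + v
print(x)
```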
39

Methods for Accurately Modeling Complex Materials

Nicklas, Jeremy William Charles 24 July 2013 (has links)
No description available.
40

Book retrieval system: Developing a service for efficient library book retrieval using particle swarm optimization

Woods, Adam January 2024 (has links)
Traditional methods for locating books and resources in libraries often entail browsing catalogs or manual searching, which is time-consuming and inefficient. This thesis investigates the potential of automated digital services to streamline this process by utilizing Wi-Fi signal data for precise indoor localization. Central to the study is the development of a model that employs Wi-Fi received signal strength indicator (RSSI) and round-trip time (RTT) measurements to estimate the locations of library users with arm-length accuracy. The thesis aims to enhance the accuracy of location estimation by exploring the complex, nonlinear relationship between RSSI and RTT within signal fingerprints, captured using an artificial neural network (ANN). In addition, the thesis introduces and evaluates a novel variant of the Particle Swarm Optimization (PSO) algorithm, named Randomized Particle Swarm Optimization (RPSO). By incorporating randomness into the conventional PSO framework, the RPSO algorithm aims to address the limitations of standard PSO, potentially offering more accurate and reliable location estimations. The PSO algorithms, including RPSO, were integrated into the training process of the ANN to optimize the network's weights and biases through direct optimization, as well as to tune the hyperparameters of the ANN's built-in optimizer. The findings suggest that optimizing the hyperparameters yields better results than direct optimization of weights and biases. However, RPSO did not significantly enhance performance compared to standard PSO in this context, indicating the need for further investigation into its application and potential benefits in complex optimization scenarios.
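A rough sketch of the hyperparameter-tuning setup the abstract describes: a small PSO searches over an ANN's learning rate and hidden-layer size, with an added random re-initialization step standing in for the RPSO idea. The thesis's actual randomization scheme, network, and data are not reproduced; everything below is synthetic.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for Wi-Fi fingerprints: 6 APs x (RSSI, RTT) features
# mapped to a 2-D position. Real data and preprocessing are in the thesis.
X = rng.normal(size=(300, 12))
y = X[:, :2] @ rng.normal(size=(2, 2)) + 0.1 * rng.normal(size=(300, 2))

def fitness(h):                  # h = [log10(learning rate), hidden units]
    net = MLPRegressor(hidden_layer_sizes=(int(h[1]),),
                       learning_rate_init=10 ** h[0],
                       max_iter=200, random_state=0)
    score = cross_val_score(net, X, y, cv=3, scoring="neg_mean_squared_error")
    return -score.mean()         # mean CV error (lower is better)

lb, ub = np.array([-4.0, 8.0]), np.array([-1.0, 64.0])
n, iters, w, c1, c2, p_restart = 6, 6, 0.7, 1.5, 1.5, 0.2
pos = rng.uniform(lb, ub, (n, 2))
vel = np.zeros((n, 2))
pbest, pval = pos.copy(), np.array([fitness(p) for p in pos])
g = pbest[pval.argmin()]

for _ in range(iters):
    r1, r2 = rng.random((n, 2)), rng.random((n, 2))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
    pos = np.clip(pos + vel, lb, ub)
    # "RPSO"-style randomness (an assumption, not the thesis's exact
    # scheme): re-seed a fraction of particles to escape local optima.
    k = rng.random(n) < p_restart
    pos[k] = rng.uniform(lb, ub, (int(k.sum()), 2))
    val = np.array([fitness(p) for p in pos])
    better = val < pval
    pbest[better], pval[better] = pos[better], val[better]
    g = pbest[pval.argmin()]

print(f"best CV MSE {pval.min():.4f} at lr={10 ** g[0]:.4g}, units={int(g[1])}")
```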
