51 |
Uma proposta lúdica com utilização do GeoGebra para o estudo de funções quadráticas e probabilidade geométrica. Canavezi, Leandro Souza, 17 September 2016 (has links)
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES) / This dissertation reports the conception, planning, construction, and implementation of activities for the study of quadratic functions and geometric probability in 9th-grade elementary school classes. It also presents an analysis of the activities carried out by the students and conclusions about the proposed objectives and the goals achieved. The main objective of the activities is to give students a better grasp of the content and topics covered through a playful, interactive, and motivating approach. The specific objectives are to develop the ability to translate a mathematical problem into mathematical language; manipulate algebraic expressions; make estimates and comparisons; develop mathematical knowledge such as expressing and calculating the area and perimeter of plane figures, calculating probabilities of random events, solving quadratic equations, and plotting graphs of quadratic functions; and use the GeoGebra software or mobile application. To this end, we created an adapted game of darts and activity sheets containing instructions, questions, tables, graphs, calculation exercises, optimization problems, and scripts for graphical constructions in GeoGebra. The methodology used was Didactic Engineering. The activities were implemented in two 9th-grade elementary school classes at two different schools: one class at a school in the municipal school system of Bauru, São Paulo, and another at a school in the state school system of Agudos, São Paulo. The implementation used 12 50-minute lessons in each class, given as six days of double lessons, in which the students actively participated in all activities. Our work draws on the National Curriculum Parameters (PCN) and other documents governing the teaching of mathematics in public schools in Brazil. We recommend and authorize the reproduction of these activities for didactic purposes.
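The geometric-probability idea behind the adapted dart game can be illustrated with a short simulation. The sketch below is not from the dissertation; it assumes the textbook setup of a dart landing uniformly at random in a square with an inscribed circle, where the hit probability equals the ratio of areas, pi/4.

```python
import random

def estimate_hit_probability(n_throws=100_000, seed=42):
    """Estimate the probability that a dart thrown uniformly at random
    into a 2x2 square (centered at the origin) lands inside the inscribed
    unit circle.  Geometric probability: favorable area / total area = pi/4.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_throws):
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:  # inside the circle of radius 1
            hits += 1
    return hits / n_throws

print(estimate_hit_probability())  # close to pi/4 ≈ 0.785
```

Students can compare the simulated frequency with the exact area ratio, which is the same reasoning the activity sheets apply to the regions of the dart board.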
|
52 |
Objeto de aprendizagem para o ensino de algoritmos solucionadores de problemas de otimização em redes. Lourenço, Wilson Da Silva, 26 February 2015 (has links)
Network optimization problems (NOP) are common to several areas, such as engineering, transportation, and telecommunications, and have been the object of intense research and study. Among the classical NOP are the Shortest Path Problem (SPP), the Max Flow Problem (MFP), and the Traveling Salesman Problem (TSP), which are usually studied in undergraduate and graduate courses such as Industrial Engineering, Computer Science, Information Systems, and Logistics, using resources such as chalk and blackboard. Such resources hinder the teacher's work of showing how the algorithms that solve these problems operate while maintaining students' motivation to learn. In this context, this research proposes a computational tool, characterized as a Learning Object (OA) and called TASNOP - Teaching Algorithms for Solving Network Optimization Problems, whose purpose is to contribute to students' understanding of NOP concepts and, above all, of the workings of the A*, Greedy Search, and Dijkstra algorithms used to solve the SPP, the Ford-Fulkerson algorithm employed to solve the MFP, and the Nearest Neighbor algorithm used to solve the TSP. It is important to highlight that the proposed OA can be accessed through the web and also embedded in distance learning environments (DLE). Experiments conducted in 2014 with 129 Computer Science students, of whom 51 solved an exercise using TASNOP and 78 without it, confirm that the students who used TASNOP performed better on the proposed exercise, corroborating the idea that the OA helped improve their understanding of the algorithms discussed in this research. In addition, the 51 students who used TASNOP answered a questionnaire about its use; their answers indicate that TASNOP has the potential to serve as a learning support tool.
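Of the algorithms TASNOP teaches, Dijkstra's shortest-path algorithm is the most compact to sketch. The following is a standard priority-queue implementation (not code from TASNOP itself); the toy graph is invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for a graph with non-negative edge
    weights.  graph: dict mapping node -> list of (neighbor, weight)."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

# Hypothetical 4-node instance of the SPP.
toy = {"A": [("B", 1), ("C", 4)],
       "B": [("C", 2), ("D", 6)],
       "C": [("D", 3)],
       "D": []}
print(dijkstra(toy, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 6}
```

Stepping through the queue operations on a small instance like this is exactly the kind of algorithm trace the OA aims to make visible to students.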
|
53 |
Autonomous Probabilistic Hardware for Unconventional Computing. Rafatul Faria (8771336), 29 April 2020
In this thesis, we propose a new computing platform called probabilistic spin logic (PSL), based on probabilistic bits (p-bits) implemented with low-barrier nanomagnets (LBM) whose thermal barrier is on the order of kT, unlike conventional memory and spin logic devices that rely on high-barrier magnets (40-60 kT) to retain stability. p-bits are tunable random number generators (TRNG), analogous to binary stochastic neurons (BSN) in artificial neural networks (ANN): their output fluctuates between the +1 and -1 states with 50-50 probability at zero input bias, and the stochastic output can be tuned by an applied input, producing a sigmoidal response characteristic. p-bits can be interconnected by a synapse or weight matrix [J] to build p-circuits for solving a wide variety of complex unconventional problems such as inference, invertible Boolean logic, sampling, and optimization. For proper operation, the p-bits must be updated sequentially, with each p-bit update informed of the states of the other p-bits it is connected to; in digital clocked hardware this requires sequencers. The unique feature of our probabilistic hardware is that it is autonomous: it runs without any clocks or sequencers. To ensure the necessary sequential informed updates in the autonomous hardware, the synapse delay must be much smaller than the neuron fluctuation time.

We demonstrate this autonomous hardware through SPICE simulations of several low-barrier-nanomagnet-based p-circuit designs, covering both symmetrically connected Boltzmann networks and directed acyclic Bayesian networks. Interestingly, for Bayesian networks a specific parent-to-child update order is required, which calls for a specific design rule so that the autonomous probabilistic hardware naturally ensures this update order without any clocks.
To address the scalability of these autonomous hardware designs, we also propose compact models for two different designs, benchmark them against SPICE simulation, and show that the compact models faithfully mimic the dynamics of the real hardware.
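The sequential, informed p-bit update described above can be sketched in software. The snippet below is a behavioral toy model, not the thesis's SPICE circuits: each p-bit's output is the sign of a tanh-biased random draw, the commonly used BSN update rule, and the coupling values are invented for illustration.

```python
import math
import random

def pbit_sample(J, h, steps=5000, seed=1):
    """Sequentially updated p-bit network (binary stochastic neurons).
    Each p-bit fluctuates between -1 and +1; its bias is the sigmoidal
    (tanh) response to the synaptic input I_i = sum_j J[i][j]*m[j] + h[i].
    Returns a histogram of visited states."""
    rng = random.Random(seed)
    n = len(h)
    m = [rng.choice([-1, 1]) for _ in range(n)]
    counts = {}
    for _ in range(steps):
        for i in range(n):  # informed, one-at-a-time updates
            I = sum(J[i][j] * m[j] for j in range(n)) + h[i]
            m[i] = 1 if rng.uniform(-1.0, 1.0) < math.tanh(I) else -1
        state = tuple(m)
        counts[state] = counts.get(state, 0) + 1
    return counts

# Two ferromagnetically coupled p-bits (hypothetical J): aligned states dominate.
J = [[0.0, 1.0], [1.0, 0.0]]
h = [0.0, 0.0]
hist = pbit_sample(J, h)
aligned = hist.get((1, 1), 0) + hist.get((-1, -1), 0)
print(aligned / sum(hist.values()))  # well above 0.5
```

In the autonomous hardware the same informed ordering emerges from device timescales (fast synapse, slow neuron) rather than from the explicit loop used here.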
|
54 |
Partial preference models in discrete multi-objective optimization / Intégration de préférences expertes en optimisation multicritère. Kaddani, Sami, 10 March 2017 (has links)
Multi-objective optimization problems often lead to very large nondominated sets as the size of the problem or the number of objectives increases. Generating the whole nondominated set requires prohibitive computation time, while most of the corresponding solutions are irrelevant to the decision maker. Another approach consists in eliciting preference information, which reduces the computation time and produces one or a very limited number of solutions. Most of the time this requires the elicitation of precise preference parameters, which is often difficult and partly arbitrary for a decision maker, and may discard solutions of interest. An intermediate approach consists in reasoning with preference relations built from partial information. In this thesis, we present several such partial preference models. We especially focus on generating the set of points that are nondominated according to these preference relations. Experiments show that this approach is competitive in both computation time and the quality of the generated preferred sets.
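The two filtering steps in the abstract, Pareto dominance and a preference relation built from partial information, can be sketched concretely. The code below is illustrative only; the points and the admissible weight set are invented, and the weighted-sum preference relation is just one simple instance of a partial preference model, not the thesis's models.

```python
def dominates(a, b):
    """Pareto dominance (minimization): a is <= b everywhere, < somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def nondominated(points):
    """Points not Pareto-dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def pref_dominates(a, b, weight_set):
    """Preference under partial information: a is preferred to b if a's
    weighted sum is <= b's for every admissible weight vector, < for one."""
    scores = [(sum(w * x for w, x in zip(ws, a)),
               sum(w * y for w, y in zip(ws, b))) for ws in weight_set]
    return all(sa <= sb for sa, sb in scores) and any(sa < sb for sa, sb in scores)

pts = [(1, 5), (2, 3), (4, 1), (3, 4), (5, 5)]
W = [(0.4, 0.6), (0.6, 0.4)]  # hypothetical admissible weight vectors

preferred = [p for p in pts if not any(pref_dominates(q, p, W) for q in pts if q != p)]
print(nondominated(pts))  # [(1, 5), (2, 3), (4, 1)]
print(preferred)          # [(2, 3), (4, 1)]
```

The partial preference relation strictly refines Pareto dominance here: it keeps two of the three nondominated points, which is the size reduction the intermediate approach is after.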
|
55 |
A Comparative Study on Optimization Algorithms and its efficiency. Ahmed Sheik, Kareem, January 2022 (has links)
Background: In computer science, optimization can be defined as finding the most cost-effective or best achievable performance under given circumstances, maximizing desired factors and minimizing undesirable ones. Many real-world problems are continuous, and finding global solutions is not easy; however, advances in computer technology keep increasing the speed of computation [1]. For any optimization issue, the optimization method, an efficient numerical simulator, and a realistic depiction of the physical process we intend to describe and optimize are interconnected components of the optimization process [2]. Objectives: A literature review of existing optimization algorithms is performed. Ten different benchmark functions are implemented on the chosen algorithms, namely GA (Genetic Algorithm), the ACO (Ant Colony Optimization) method, and the Plant Intelligence Behaviour Optimization (PIBO) algorithm, to measure their efficiency in terms of metrics such as CPU time, optimality, accuracy, and mean best standard deviation. Methods: This research uses a mixed-method approach. A literature review covers the existing optimization algorithms. In addition, an experiment applies the ten benchmark functions to the PSO, ACO, GA, and PIBO algorithms to measure their efficiency with respect to the four metrics (CPU time, optimality, accuracy, and mean best standard deviation). This tells us which optimization algorithms perform better. Results: The experimental findings are presented in this section. Running the benchmark functions on the suggested method and the other methods, the metrics (CPU time, optimality, accuracy, and mean best standard deviation) are measured and tabulated, and graphs are produced from the data obtained.
Analysis and Discussion: The research questions are addressed based on the results of the experiment. Conclusion: We conclude the research by analyzing the performance of the existing optimization methods. As the results for the optimality, mean best, standard deviation, and accuracy metrics show, PIBO performs much better, but it has a significant drawback in CPU time: it takes much longer than the PSO algorithm, is close to GA, and still does much better than the ACO algorithm.
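The experimental setup of the study, evaluating optimizers on benchmark functions and recording best value and CPU time, can be sketched minimally. The code below uses the classic Rastrigin benchmark and plain random search as a stand-in baseline; it is not the thesis's GA/ACO/PSO/PIBO implementation, and the evaluation budget is invented.

```python
import math
import random
import time

def rastrigin(x):
    """Classic multimodal benchmark function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def random_search(f, dim, evals, seed=0, lo=-5.12, hi=5.12):
    """Baseline optimizer: best of `evals` uniform random samples."""
    rng = random.Random(seed)
    return min(f([rng.uniform(lo, hi) for _ in range(dim)]) for _ in range(evals))

start = time.process_time()
best = random_search(rastrigin, dim=2, evals=20_000)
cpu = time.process_time() - start
print(f"best f = {best:.3f}, CPU time = {cpu:.3f}s")
```

Swapping `random_search` for each algorithm under test and repeating over the ten benchmark functions yields exactly the kind of (optimality, CPU time) table the Results section describes.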
|
56 |
Ant Colony Optimization Technique to Solve Min-Max Multi-Depot Vehicle Routing Problem. Venkata Narasimha, Koushik Srinath, January 2011 (has links)
No description available.
|
57 |
Dynamic Behavior Analysis of Membrane-Inspired Evolutionary Algorithms. Zhang, G., Cheng, J.X., Gheorghe, Marian, January 2014 (has links)
A membrane-inspired evolutionary algorithm (MIEA) is a successful instance of a model linking membrane computing and evolutionary algorithms. This paper proposes analyzing the dynamic behavior of MIEAs by introducing a set of population diversity and convergence measures, the first attempt to obtain additional insights into the search capabilities of MIEAs. The analysis is performed on an MIEA, QEPS (a quantum-inspired evolutionary algorithm based on membrane computing), and its counterpart algorithm, QIEA (a quantum-inspired evolutionary algorithm), using a comparative approach in an experimental setting to better understand their characteristics and performance. The relationship between these measures and fitness is also analyzed by presenting a tendency correlation coefficient to evaluate the importance of the various population and convergence measures, which is beneficial for further improvement of MIEAs. Results show that QEPS achieves a better balance between convergence and diversity than QIEA, indicating that QEPS has a stronger capacity for balancing exploration and exploitation and thereby preventing the premature convergence that might otherwise occur. Experiments on knapsack problems support this statement.
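A population diversity measure of the kind the paper introduces can be made concrete with a small sketch. The code below is not the paper's measure; it is one common genotype-level choice, normalized average pairwise Hamming distance over a population of binary strings (the natural encoding for the knapsack experiments mentioned), with invented example populations.

```python
def hamming_diversity(population):
    """Average pairwise Hamming distance of a population of equal-length
    binary strings, normalized to [0, 1].  0 = fully converged population,
    values near 1 = maximally spread population."""
    n, length = len(population), len(population[0])
    total = sum(sum(a != b for a, b in zip(p, q))
                for i, p in enumerate(population) for q in population[i + 1:])
    pairs = n * (n - 1) // 2
    return total / (pairs * length)

converged = ["1010", "1010", "1010", "1011"]  # near-premature-convergence
spread = ["0000", "1111", "0101", "1010"]     # diverse population
print(hamming_diversity(converged))  # 0.125
print(hamming_diversity(spread))     # ~0.667
```

Tracking such a measure per generation, alongside best fitness, is what allows the convergence-versus-diversity comparison between QEPS and QIEA to be quantified.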
|
58 |
Optimalizační problémy při (max,min)-lineárních omezeních a některé související úlohy / Optimization Problems under (max, min)-Linear Constraints and Some Related Topics. Gad, Mahmoud Attya Mohamed, January 2015 (has links)
Title: Optimization Problems under (max, min)-Linear Constraints and Some Related Topics. Author: Mahmoud Gad. Department/Institute: Department of Probability and Mathematical Statistics. Supervisors of the doctoral thesis: 1. Prof. RNDr. Karel Zimmermann, DrSc.; 2. Prof. Dr. Assem Tharwat, Cairo University, Egypt. Abstract: Problems on algebraic structures in which pairs of operations such as (max, +) or (max, min) replace the addition and multiplication of classical linear algebra have appeared in the literature approximately since the sixties of the last century. The first publications on these algebraic structures are due to Shimbel [37], who applied these ideas to communication networks; Cunninghame-Green [12, 13], Vorobjov [40], and Gidffer [18] applied these algebraic structures to machine-time scheduling problems. A systematic theory of such algebraic structures was published, probably for the first time, in [14]. In the recently published book [4], readers can find the latest results concerning the theory of and algorithms for (max, +)-linear systems of equations and inequalities. Since the operation max, which replaces addition, is no longer a group operation but only a semigroup operation, there is a substantial difference between solving systems with variables on one side and systems with variables occurring on both sides of the equations....
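The (max, min) algebra the abstract describes is easy to demonstrate on a one-sided system. The sketch below is illustrative, with an invented 2x2 instance: it evaluates the (max, min) matrix-vector product and computes the standard maximal ("principal") solution of A (x) x <= b, using the fact that min(a_ij, x_j) <= b_i constrains x_j only in rows where a_ij > b_i.

```python
def maxmin_apply(A, x):
    """y_i = max_j min(a_ij, x_j): the (max, min)-linear analogue of a
    matrix-vector product (max replaces +, min replaces *)."""
    return [max(min(aij, xj) for aij, xj in zip(row, x)) for row in A]

def principal_solution(A, b):
    """Largest x with maxmin_apply(A, x) <= b (componentwise):
    x_j = min over {i : a_ij > b_i} of b_i, or +inf if that set is empty."""
    n = len(A[0])
    x = []
    for j in range(n):
        bounds = [b[i] for i in range(len(A)) if A[i][j] > b[i]]
        x.append(min(bounds) if bounds else float("inf"))
    return x

A = [[3, 5], [6, 2]]  # hypothetical coefficients
b = [4, 3]
x = principal_solution(A, b)
print(x, maxmin_apply(A, x))  # [3, 4] [4, 3]
```

Systems with variables on both sides, the harder case the abstract points to, do not admit such a closed-form maximal solution, which is exactly where the semigroup structure of max bites.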
|
59 |
Scalable Parallel Machine Learning on High Performance Computing Systems: Clustering and Reinforcement Learning. Weijian Zheng (14226626), 08 December 2022
High-performance computing (HPC) and machine learning (ML) have been widely adopted by both academia and industries to address enormous data problems at extreme scales. While research has reported on the interactions of HPC and ML, achieving high performance and scalability for parallel and distributed ML algorithms is still a challenging task. This dissertation first summarizes the major challenges for applying HPC to ML applications: 1) poor performance and scalability, 2) loss of the convergence rate, 3) lower quality of the trained model, and 4) a lack of performance optimization techniques designed for specific applications. Researchers can address the four challenges in new ML applications. This dissertation shows how to solve them for two specific applications: 1) a clustering algorithm and 2) graph optimization algorithms that use reinforcement learning (RL).

As to the clustering algorithm, we first propose an algorithm called the simulated-annealing clustering algorithm. By combining a blocked data layout and asynchronous local optimization within each thread, the simulated-annealing enhanced clustering algorithm has a convergence rate that is comparable to the K-means algorithm but with much higher performance. Experiments with synthetic and real-world datasets show that the simulated-annealing enhanced clustering algorithm is significantly faster than the MPI K-means library using up to 1024 cores. However, the optimization costs (Sum of Square Error (SSE)) of the simulated-annealing enhanced clustering algorithm became higher than the original costs. To tackle this problem, we devise a new algorithm called the full-step feel-the-way clustering algorithm. In the full-step feel-the-way algorithm, there are L local steps within each block of data points. We use the first local step’s results to compute accurate global optimization costs. Our results show that the full-step algorithm can significantly reduce the global number of iterations needed to converge while obtaining low SSE costs. However, the time spent on the local steps is greater than the benefits of the saved iterations. To improve this performance, we next optimize the local step time by incorporating a sampling-based method called reassignment-history-aware sampling. Extensive experiments with various synthetic and real world datasets (e.g., MNIST, CIFAR-10, ENRON, and PLACES-2) show that our parallel algorithms can outperform the fastest open-source MPI K-means implementation by up to 110% on 4,096 CPU cores with comparable SSE costs.

Our evaluations of the sampling-based feel-the-way algorithm establish the effectiveness of the local optimization strategy, the blocked data layout, and the sampling methods for addressing the challenges of applying HPC to ML applications. To explore more parallel strategies and optimization techniques, we focus on a more complex application: graph optimization problems using reinforcement learning (RL). RL has proved successful for automatically learning good heuristics to solve graph optimization problems. However, the existing RL systems either do not support graph RL environments or do not support multiple or many GPUs in a distributed setting. This has compromised RL’s ability to solve large scale graph optimization problems due to the lack of parallelization and high scalability. To address the challenges of parallelization and scalability, we develop OpenGraphGym-MG, a high performance distributed-GPU RL framework for solving graph optimization problems. OpenGraphGym-MG focuses on a class of computationally demanding RL problems in which both the RL environment and the policy model are highly computation intensive. In this work, we distribute large-scale graphs across distributed GPUs and use spatial parallelism and data parallelism to achieve scalable performance. We compare and analyze the performance of spatial and data parallelism and highlight their differences. To support graph neural network (GNN) layers that take data samples partitioned across distributed GPUs as input, we design new parallel mathematical kernels to perform operations on distributed 3D sparse and 3D dense tensors. To handle costly RL environments, we design new parallel graph environments to scale up all RL-environment-related operations. By combining the scalable GNN layers with the scalable RL environment, we are able to develop high performance OpenGraphGym-MG training and inference algorithms in parallel.

To summarize, after proposing the major challenges for applying HPC to ML applications, this thesis explores several parallel strategies and performance optimization techniques using two ML applications. Specifically, we propose a local optimization strategy, a blocked data layout, and sampling methods for accelerating the clustering algorithm, and we create a spatial parallelism strategy, a parallel graph environment, agent, and policy model, and an optimized replay buffer, and multi-node selection strategy for solving large optimization problems over graphs. Our evaluations prove the effectiveness of these strategies and demonstrate that our accelerations can significantly outperform the state-of-the-art ML libraries and frameworks without loss of quality in trained models.
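The SSE cost that the clustering chapters repeatedly optimize, and the assignment step whose local variants the feel-the-way algorithms restructure, can be written down in a few lines. This is a serial, single-node sketch for orientation only, not the dissertation's blocked, multi-threaded implementation; the sample points and centroids are invented.

```python
def sse(points, centroids, assign):
    """Sum of Square Error: the clustering cost that both K-means and the
    annealing-enhanced variants in the abstract try to minimize."""
    return sum(sum((p - c) ** 2 for p, c in zip(points[i], centroids[a]))
               for i, a in enumerate(assign))

def assign_step(points, centroids):
    """Assign each point to its nearest centroid: one half of a standard
    Lloyd/K-means iteration (the other half recomputes the centroids)."""
    def d2(p, c):
        return sum((x - y) ** 2 for x, y in zip(p, c))
    return [min(range(len(centroids)), key=lambda k: d2(p, centroids[k]))
            for p in points]

pts = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
cents = [(0.0, 0.0), (5.0, 5.0)]
a = assign_step(pts, cents)
print(a, sse(pts, cents, a))  # [0, 0, 1, 1] and a small SSE
```

The blocked data layout and local steps in the dissertation amount to running `assign_step` over contiguous blocks per thread and deferring the global SSE bookkeeping, which is what makes the parallel versions scale.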
|
60 |
Learning during search / Apprendre durant la recherche combinatoire. Arbelaez Rodriguez, Alejandro, 31 May 2011 (has links)
Autonomous Search is a new emerging area in Constraint Programming, motivated by the demonstrated importance of applying Machine Learning techniques to the Algorithm Selection Problem, with potential applications ranging from planning and configuring to scheduling. This area aims at developing automatic tools to improve the performance of search algorithms for combinatorial problems, e.g., selecting the best parameter settings for a constraint solver on a particular problem instance. In this thesis, we study three different points of view on automatically solving combinatorial problems, in particular Constraint Satisfaction, Constraint Optimization, and SAT problems. First, we present domFD, a new variable selection heuristic whose objective is to heuristically compute a simplified form of functional dependencies called weak dependencies; these weak dependencies are then used to guide the search at each decision point. Second, we study the Algorithm Selection Problem from two different angles. On the one hand, we review a traditional portfolio algorithm that learns a heuristics model offline for the Protein Structure Prediction Problem. On the other hand, we present the Continuous Search paradigm, whose objective is to allow any user to eventually get their constraint solver to achieve top performance on their problems. Continuous Search comes in two modes: the functioning mode solves the user's problem instances using the current heuristics model; the exploration mode reuses these instances to train and improve the heuristics model through Machine Learning during the computer's idle time.
Finally, the last part of the thesis considers the question of adding a knowledge-sharing layer to current portfolio-based parallel local search solvers for SAT. We show that by sharing the best configuration of each algorithm in the parallel portfolio on a regular basis and aggregating this information in specific ways, the overall performance can be greatly improved.
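The knowledge-sharing idea in the last part, solvers in a parallel portfolio periodically exchanging their best configuration, can be caricatured with a toy experiment. The sketch below is not the thesis's SAT solver: it runs several stochastic hill-climbers on the trivial OneMax problem and, every few rounds, restarts all of them from the best solution found so far as a stand-in for the sharing layer. All names and parameters are invented.

```python
import random

def onemax(bits):
    """Toy objective: number of ones in a bitstring (maximize)."""
    return sum(bits)

def cooperative_portfolio(n_solvers=4, length=30, rounds=40, share_every=5, seed=0):
    """Independent bit-flip hill-climbers; every `share_every` rounds all
    solvers adopt the best solution found so far (the sharing step)."""
    rng = random.Random(seed)
    states = [[rng.randint(0, 1) for _ in range(length)] for _ in range(n_solvers)]
    for r in range(1, rounds + 1):
        for s in states:
            i = rng.randrange(length)
            flipped = s[:]
            flipped[i] ^= 1
            if onemax(flipped) >= onemax(s):  # accept non-worsening moves
                s[:] = flipped
        if r % share_every == 0:
            best = max(states, key=onemax)
            states = [best[:] for _ in states]  # share the best configuration
    return max(onemax(s) for s in states)

print(cooperative_portfolio())  # climbs toward the optimum of 30
```

Disabling the sharing step (setting `share_every` past `rounds`) typically leaves the final score lower, which is the qualitative effect the thesis measures on real parallel SAT portfolios.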
|