1

Algorithms for optimising control

Kambhampati, C. January 1988
No description available.
2

Anionic polymerisation of caprolactam : an approach to optimising the polymerisation conditions to be used in the jetting process

Khodabakhshi, Khosrow January 2011
The main aim of this project was to investigate the possibility of manufacturing 3D parts of polyamide (nylon or PA) 6 by inkjetting its monomer caprolactam (CL). The principle of this process was similar to other rapid prototyping (RP) and rapid manufacturing (RM) processes, in which a 3D part is manufactured by layer-on-layer deposition of material. PA6 was used as the thermoplastic polymer in this work because of its good properties and because PA6 can be produced by heating its monomer (plus catalyst and activator) in a short time. Two polymerisation mixtures, CL-catalyst (mixture A) and CL-activator (mixture B), were intended to be jetted separately using conventional jetting heads and to polymerise shortly after heating.

Anionic polymerisation of CL (APCL) was investigated in the bulk and on a smaller scale. Sodium caprolactamate (CLNa and C10) and caprolactam magnesium bromide (CLMgBr) were used as catalysts, and N-acetylcaprolactam (ACL) and a di-functional activator (C20) were used as activators. The influence of the polymerisation conditions was investigated and optimised; these were the catalyst-activator concentration, the polymerisation temperature and the polymerisation atmosphere. The physical properties (monomer conversion, crystallinity and viscosity average molecular weight) of PA6 samples produced using each catalyst-activator combination were measured and compared. Small scale polymerisation was carried out on a hotplate, by hot stage microscopy and by differential scanning calorimetry (DSC). The influence of the heating strategy on small scale polymerisation was studied using DSC. The polymerisation mixture compositions were characterised by rheometry, Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM) and optical microscopy to investigate their suitability for jetting with the available jetting heads.

It was shown that the CLMgBr-ACL combination resulted in fast polymerisation that was not sensitive to moisture. The C10-C20 combination resulted in fast polymerisation with the best properties in a protected environment (nitrogen); however, the polymerisation was affected by moisture in air, and both the properties of the polymer produced and the rate of polymerisation decreased in air. Polymers produced using CLNa-ACL had the poorest properties, and polymerisation did not occur in air. Material characterisation showed that micro-crystals of CLMgBr existed in the CLMgBr-CL mixture at the jetting temperature (80 °C) and were too large to be jetted. However, the mixture of C10 in CL could be partially jetted. The activator mixtures had properties similar to CL and were easily jetted. Drop-on-drop polymerisation was carried out by dripping droplets of mixtures A and B (at 80 °C) on top of each other on a hotplate at the polymerisation temperature.

Small scale polymerisation in a DSC showed that the monomer conversion increased as the polymerisation temperature increased from 140 °C to 180 °C and decreased from 180 °C to 200 °C. The crystallinity of the polymer produced in the DSC decreased with increasing polymerisation temperature. Hot stage microscopy produced evidence of simultaneous polymerisation and crystallisation on heating. Small scale polymerisation carried out in an oven and analysed by DSC showed that increasing the catalyst-activator concentration increased monomer conversion and decreased crystallinity. Monomer conversion also increased with increasing polymerisation temperature and polymerisation time.

Comparison between the small scale and bulk polymerisations showed good agreement between the two polymerisation rates, indicating that the polymerisation mechanism did not change significantly when the quantity of material was reduced to less than 20 mg. Finally, the polymerisation was carried out in a DSC after jetting the C10-CL and C20-CL mixtures into a DSC pan using a jetting system developed in separate work.
3

Reducing the cost of heuristic generation with machine learning

Ogilvie, William Fraser January 2018
The space of compile-time transformations and/or run-time options which can improve the performance of a given code is usually so large as to be virtually impossible to search in any practical time-frame. Thus, heuristics are leveraged which can suggest good, but not necessarily best, configurations. Unfortunately, since such heuristics are tightly coupled to processor architecture, performance is not portable; heuristics must be tuned, traditionally manually, for each device in turn. This is extremely laborious, and the result is often outdated heuristics and less effective optimisation. Ideally, to keep up with changes in hardware and run-time environments, a fast and automated method to generate heuristics is needed. Recent works have shown that machine learning can be used to produce mathematical models or rules in their place, which is automated but not necessarily fast. This thesis proposes the use of active machine learning, sequential analysis, and active feature acquisition to accelerate the training process in an automatic way, thereby tackling this timely and substantive issue.

First, a demonstration of the efficiency of active learning over the previously standard supervised machine learning technique is presented in the form of an ensemble algorithm. This algorithm learns a model capable of predicting the best processing device in a heterogeneous system to use per workload size, per kernel. Active machine learning is a methodology which is sensitive to the cost of training; specifically, it is able to reduce the time taken to construct a model by predicting how much is expected to be learnt from each new training instance and then choosing to learn only from the most profitable examples. The exemplar heuristic is constructed on average 4x faster than a baseline approach, whilst maintaining comparable quality.

Next, a combination of active learning and sequential analysis is presented which reduces both the number of samples per training example and the number of training examples overall. This allows for the creation of models based on noisy information, sacrificing accuracy per training instance for speed, without having a significant effect on the quality of the final product. In particular, the runtime of high-performance compute kernels is predicted from code transformations one may want to apply, using a heuristic which was generated up to 26x faster than with active learning alone. Finally, preliminary work demonstrates that an automated system can be created which optimises both the number of training examples and the features to select during training, further substantially accelerating learning in cases where each feature value that is revealed comes at some cost.
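As a rough illustration of the active-learning loop this abstract describes, the sketch below selects the next configuration to benchmark by committee disagreement and only labels the most informative candidates; the use of scikit-learn and the measure_runtime benchmark callback are assumptions for illustration, not the thesis's actual implementation.

# Hedged sketch of query-by-committee active learning for heuristic construction.
# `candidates` is a 2D array of feature vectors (e.g. workload size, kernel features);
# `measure_runtime` is a hypothetical, costly benchmarking callback.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_heuristic(candidates, measure_runtime, seed_size=10, budget=50, committee=5):
    rng = np.random.default_rng(0)
    labelled = list(rng.choice(len(candidates), size=seed_size, replace=False))
    X = candidates[labelled]
    y = np.array([measure_runtime(c) for c in X])              # expensive benchmark runs

    while len(labelled) < budget:
        # Train a small committee of models on the data labelled so far.
        models = [RandomForestRegressor(n_estimators=30, random_state=i).fit(X, y)
                  for i in range(committee)]
        preds = np.stack([m.predict(candidates) for m in models])
        disagreement = preds.std(axis=0)                        # committee variance
        disagreement[labelled] = -np.inf                        # never re-query a point
        nxt = int(disagreement.argmax())                        # most informative candidate

        labelled.append(nxt)
        X = np.vstack([X, candidates[nxt]])
        y = np.append(y, measure_runtime(candidates[nxt]))

    return RandomForestRegressor(n_estimators=100).fit(X, y)   # final heuristic model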
4

Samkörning av databaser - Är lagen ett hinder? / Comparison of databases - Is the law an obstacle?

Ankarberg, Alexander January 2006
Title: Comparison of databases - Is the law an obstacle?
Authors: Alexander Ankarberg, Applied Information Science.
Tutors: Lars-Eric Ljung
Problem: Cross-running databases is becoming more and more significant as the flow of information grows. There are huge benefits if we start to use the technology that already exists. Today the law is an obstacle, so what would happen if the law were not so stern? My question is: why don't we cross-run databases more efficiently between parts of institutions?
Aim: The purpose of this essay is to evaluate why institutions do not cross-run databases, and to start a discussion. There are possibilities that we do not use today. One aim is also to find solutions so that we can start to use these techniques. The essay explains the fundamentals and discusses both the advantages and the disadvantages in depth.
Method: The author has approached the problem from two directions, induction and deduction, which combined give abduction. The author hopes that this yields as many angles as possible and answers that are as complete as possible. The essay also includes an inquiry based on interviews with ordinary people.
Conclusions: The law is not up to date, nor is it made for today's technology. It is in some ways an obstacle to a more efficient system, and cross-running could save enormous amounts of money for both the government and the common man. There is hope, though, and small revolutions happen every day. There are also ways to work around the law and make the system more efficient, namely with the consent of the person the information concerns. Another possibility is safety classes, assigning a classification number to information.
5

Optimalizace manipulační techniky v podniku Nestlé Česko s.r.o., závod ZORA Olomouc / Optimising the material-handling equipment at Nestlé Česko s.r.o., plant ZORA Olomouc

Kovář, Jiří January 2015
This thesis discusses the optimal way to replace the material-handling equipment at Nestlé Česko s.r.o., plant ZORA Olomouc. The theoretical part describes the issue of warehousing in general and focuses on material-handling equipment and vehicles. The following analytical part focuses specifically on the company Nestlé Česko s.r.o., particularly the plant ZORA Olomouc, with the foremost aim of analysing and optimising the current material-handling equipment.
6

Optimising remote collection of odontological data.

Shakeri, Alireza January 2021
1.1 Problem statement
This study examines whether and how patient/dentist contact can be reduced through remote diagnosis. The goal of the study is to formulate an understanding of what type of data needs to be collected and how to optimise the collection of that data (through digital platforms) in order to put together an odontological diagnosis that is as accurate as possible. In other words, can a medical diagnosis of the oral cavity be made correctly at a distance? And how can the process of remote odontological data collection be optimised through platform design (interface and functionality)?

1.2 Methods
To collect remote data pertaining to the oral cavity and its health status, a system composed of three interacting parts is designed: a database for the permanent storage of user data, a webpage for the collection of user data through user input, and a backend system that acts as the conveyor of information between the users and the database. Once the system is deployed, user data is collected and interpreted, the quality of the data is assessed by qualified dentists, and the system is modified based on the feedback from users and dentists. After the system has been modified it is redeployed, new data is collected, and its quality is assessed and compared to the data previously collected. These modifications can be minor changes made to small parts of the system or major changes involving the entire system. Although this sort of feedback-loop enhancement can be performed repeatedly over a long period of time, the goal is to complete two major iterations and a series of minor changes as feedback is obtained. User feedback is received primarily through social media, as the system does not allow users to express their opinions in any direct way; this is simplified by the fact that most users are recruited through social media platforms.

1.3 Results
Although there was an initial concern that users would have trouble taking adequate images/videos of the oral cavity and any oral pathologies present, this concern was quickly dismissed. The main issues users encountered were related to navigating the platform, which resulted in users submitting incomplete data. Once changes were applied to simplify navigation the results changed drastically, and the majority of the data collected was complete. As data was collected it became clear that many different types of cases could be correctly diagnosed remotely; however, some cases will inevitably require a clinical examination to diagnose, owing to factors such as the need for radiographs and/or dental probing. Nevertheless, the changes made to the platform over the iterations helped to optimise data collection significantly.
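The abstract gives no implementation details; purely as an illustrative sketch of the three-part architecture it describes (web form, backend conveying data, database), assuming a Flask backend and an SQLite store, with all identifiers hypothetical rather than taken from the thesis:

# Minimal sketch of the described architecture: a backend endpoint that receives
# a submission from the web page and stores it for later review by a dentist.
# Flask, SQLite and every name here are assumptions, not the thesis's actual stack.
import sqlite3
from flask import Flask, request, jsonify

app = Flask(__name__)
DB = "submissions.db"

def init_db():
    with sqlite3.connect(DB) as con:
        con.execute("""CREATE TABLE IF NOT EXISTS submissions (
                           id INTEGER PRIMARY KEY AUTOINCREMENT,
                           patient_id TEXT NOT NULL,
                           symptoms TEXT,
                           image_path TEXT)""")

@app.route("/submit", methods=["POST"])
def submit():
    # Accept one remote odontological data submission from the web form.
    data = request.get_json(force=True)
    with sqlite3.connect(DB) as con:
        con.execute(
            "INSERT INTO submissions (patient_id, symptoms, image_path) VALUES (?, ?, ?)",
            (data["patient_id"], data.get("symptoms", ""), data.get("image_path", "")),
        )
    return jsonify({"status": "stored"}), 201

if __name__ == "__main__":
    init_db()
    app.run()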
7

The Omega Function : A Comparison Between Optimized Portfolios

Salih, Ali January 2011
The traditional way to analyze stocks and portfolios within the area of finance has been restricted to Sharpe and Markowitz. The Omega function and its properties enlighten the field of finance and differ from the traditional approaches when it comes to the volatility of the stocks. The Omega function, the Sharpe performance criterion and the Markowitz mean-variance model are used; all calculations are done in Matlab and the data sheets are Excel tables. The aim of this thesis is to investigate the Nordic small cap market using the Omega function, the Sharpe performance criterion and the Markowitz mean-variance model, in order to see how the proposed methods differ.
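The abstract does not define the Omega function; for reference, the commonly used empirical form (the Keating-Shadwick Omega ratio, the ratio of average gains to average losses relative to a threshold) can be sketched as below. The function name, the NumPy dependency and the example return series are illustrative assumptions, not material from the thesis.

# Hedged sketch: empirical Omega ratio of a return series at threshold theta,
# Omega(theta) = E[max(R - theta, 0)] / E[max(theta - R, 0)].
import numpy as np

def omega_ratio(returns, theta=0.0):
    r = np.asarray(returns, dtype=float)
    gains = np.clip(r - theta, 0.0, None).mean()     # average upside above the threshold
    losses = np.clip(theta - r, 0.0, None).mean()    # average downside below the threshold
    return np.inf if losses == 0 else gains / losses

# Example: compare two hypothetical portfolios at a 0% threshold.
portfolio_a = [0.02, -0.01, 0.03, -0.02, 0.01]
portfolio_b = [0.05, -0.04, 0.06, -0.05, 0.01]
print(omega_ratio(portfolio_a), omega_ratio(portfolio_b))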
8

Optimising mixed-ability grouping for effective instruction at the junior secondary school level in Botswana

Mafa, Onias 11 1900
The debate on how students of different abilities should be organised and taught is probably as old as the introduction of formal schooling. It has generated a great deal of debate in the past and continues to do so in the present millennium. This debate has invariably divided the world of educational research into two distinct camps. On one hand are proponents of ability grouping, who claim that this grouping approach creates homogeneity, which makes it possible to tailor teaching to individual needs and thus raise achievement. On the other hand are the exponents of mixed-ability grouping, who argue that ability grouping denies equality of educational opportunity to many young people, limiting their life chances and increasing social segregation. However, there is an emerging trend which posits that teachers should view students' mixed abilities as an asset which, if properly exploited, can result in effective instruction for the benefit of all students regardless of their many individual differences. This emanates from the realisation that there are different types of intelligences, and that it is not always possible for an individual student to possess all of them. Therefore, students from diverse backgrounds, endowed with multiple intelligences, can help one another understand the content better, as they will perceive it from their diverse experiential backgrounds. This qualitative study concerned itself with investigating how mixed-ability grouping can be optimised for effective instruction at the junior secondary school level in Botswana. The study made use of a literature study, focus groups, follow-up interviews and lesson observations. The major findings were that teachers are not optimising mixed-ability grouping for effective instruction. Instead, teachers have problems in teaching mixed-ability classes, with most of their teaching being teacher-centred. However, teachers can optimise mixed-ability grouping through the use of student-centred instructional strategies such as cooperative learning, small-group instruction, peer teaching and student research. Gifted students could be catered for through curriculum compaction, enrichment and extension work, while mentally challenged students could be offered remedial work. These teaching strategies are differentiated and make use of the diverse abilities found in mixed-ability classes. / Educational Studies / D.Ed. (Didactics)
9

Cilindrinių kevalų statistinis modeliavimas ir analizė / Cylindrical shells statistical modeling and analysis

Klova, Egidijus 03 June 2005
The developed software makes it possible to construct a laminated composite cylindrical shell from chosen test parameters (reinforcement angle, number of shells), to simulate the reliability of the structure, and to optimise it with respect to chosen statistical parameters. The shell's parameters can be treated as random variables, which are modelled with the Monte Carlo method. The software also provides the possibility of evaluating the structure's reliability: on the assumption that the distribution of strain in the shell is known, the structure can be optimised by minimising its mass. To show the distribution of the structure's limit state in a scatter chart, the strain value has to be modelled at every Monte Carlo step. The factors that influence the shell's strain and the structural stability are computed, and in this sequence of operations the reliability of the model evaluation is controlled. The parameters that have the greatest influence on the strain in the shell are analysed. The fluctuation of the minimal shell mass is studied in dependence on the dispersion of the parameters and on the probability that the structural strain exceeds its limit.
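As a rough illustration of the Monte Carlo reliability estimation described above, the sketch below samples uncertain shell parameters and estimates the probability that a limit state is exceeded; the limit-state function and all distributions are hypothetical placeholders, not the thesis's shell model.

# Hedged sketch of Monte Carlo reliability estimation: sample random parameters,
# evaluate a limit-state function g (g < 0 means failure) and estimate the
# failure probability as the fraction of failed samples.
import numpy as np

def limit_state(thickness, yield_stress, load):
    # Placeholder limit state: capacity (thickness * yield stress) minus demand.
    return thickness * yield_stress - load

def failure_probability(n_samples=100_000, seed=0):
    rng = np.random.default_rng(seed)
    thickness = rng.normal(2.0e-3, 1.0e-4, n_samples)     # m, assumed distribution
    yield_stress = rng.normal(250e6, 15e6, n_samples)     # Pa, assumed distribution
    load = rng.normal(4.0e5, 5.0e4, n_samples)            # N per unit width, assumed
    g = limit_state(thickness, yield_stress, load)
    return float(np.mean(g < 0.0))

print(f"Estimated failure probability: {failure_probability():.4f}")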
