1 |
Solution path algorithms : an efficient model selection approach / Wang, Gang. January 2007 (has links)
Thesis (Ph.D.)--Hong Kong University of Science and Technology, 2007. / Includes bibliographical references (leaves 102-108). Also available in electronic version.
|
2 |
Dynamic planning and scheduling in manufacturing systems with machine learning approaches / Yang, Donghai. January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references. Also available in print.
|
3 |
Dynamic planning and scheduling in manufacturing systems with machine learning approaches / Yang, Donghai. January 2008 (has links)
Thesis (Ph. D.)--University of Hong Kong, 2009. / Includes bibliographical references. Also available online.
|
4 |
Meta-learning strategies, implementations, and evaluations for algorithm selection / Köpf, Christian Rudolf. January 1900 (has links)
Thesis (doctorate)--Universität Ulm, 2005. / Includes bibliographical references (p. 227-248).
|
5 |
Algorithmic stability and ensemble-based learning / Kutin, Samuel. January 2002 (has links)
Thesis (Ph. D.)--University of Chicago, Dept. of Computer Science, June 2002. / Includes bibliographical references. Also available on the Internet.
|
6 |
Hierarchical multiple classifier learning system / Chou, Yu-Yu. January 1999 (has links)
Thesis (Ph. D.)--University of Washington, 1999. / Vita. Includes bibliographical references (leaves 92-98).
|
7 |
Machine learning-based human observer analysis of video sequences / Al-Raisi, Seema F. A. R. January 2017 (has links)
The research contributes to the field of video analysis by proposing novel approaches to automatically generating human observer performance patterns that can be used to advance modern video analytics and forensic algorithms. Eye trackers and eye-movement analysis are established tools in medical research, psychology, cognitive science and advertising, and the eye-movement data they collect can be analyzed with machine learning and statistical approaches. The study therefore attempts to understand the visual attention patterns of people observing captured CCTV footage. It investigates whether the eye gaze of observers, which determines their behaviour, depends on the instructions they are given or on the knowledge they acquire during the surveillance task, and whether observer attention to human targets differs across different areas of the tracked person's body. It also asks whether pattern analysis and machine learning can effectively replace the current conceptual and statistical approaches to analyzing eye-tracking data captured during a CCTV surveillance task. A pilot study, taking around 30 minutes per participant, involved observing 13 different pre-recorded CCTV clips of public space; participants were given a clear written description of the targets to find in each video. The study included a total of 24 participants with varying levels of experience in analyzing CCTV video. A Tobii eye-tracking system recorded the participants' eye movements, and the captured data was analyzed with statistical tools (SPSS) and machine learning algorithms (WEKA).
The research concluded that differences in behavioural patterns exist which can be used to classify the study's participants if appropriate machine learning algorithms are employed. Previous research on video analytics was limited to a few projects in which the observed human was treated as a single object, so detailed analysis of observer attention patterns based on human body-part articulation had not been investigated. All previous attempts at analyzing observer visual attention in CCTV video analytics and forensics used either conceptual or statistical approaches, which are limited in making predictions and detecting hidden patterns. A novel approach of articulating the human objects to be identified and tracked in a visual surveillance task led to constrained results, which demanded the use of advanced machine learning algorithms for classifying participants. The research also encountered several practical data-collection and analysis challenges in formal CCTV operator-based surveillance tasks, which made it difficult to obtain the appropriate cooperation from expert CCTV operators; had expert rather than novice operators been employed, a more discriminative and accurate classification might have been achieved. Machine learning approaches such as ensemble learning and tree-based algorithms can be applied where a more detailed analysis of human behaviour is needed. Traditional machine learning approaches are being challenged by recent advances in convolutional neural networks and deep learning, so future research could replace the traditional machine learning approaches employed in this study with convolutional neural networks.
The current research was limited to 13 different videos with different descriptions given to the participants for identifying and tracking different individuals. It could be expanded to cover more complex task demands and changes to the analysis process.
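As a hedged sketch of the kind of classification the abstract describes (observer behaviour patterns learned from eye-tracking features), the snippet below trains a random-forest classifier on invented fixation features; the feature names, value ranges, and the novice/experienced split are all illustrative assumptions, not data from the study, and scikit-learn stands in for the WEKA toolkit named above.

```python
# Hypothetical sketch: classifying observers (novice vs. experienced) from
# eye-tracking summary features. All features and distributions are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Invented per-participant features: mean fixation duration (ms),
# fixation count, and share of gaze time on the target's upper body.
X_novice = rng.normal([250, 120, 0.30], [40, 25, 0.08], size=(50, 3))
X_expert = rng.normal([180, 180, 0.55], [30, 30, 0.08], size=(50, 3))
X = np.vstack([X_novice, X_expert])
y = np.array([0] * 50 + [1] * 50)  # 0 = novice, 1 = experienced

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
```

With well-separated synthetic groups like these, cross-validated accuracy is high; on real eye-tracking data the separation would of course be far noisier.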
|
8 |
On the simulation and design of manycore CMPs / Thompson, Christopher Callum. January 2015 (has links)
The progression of Moore's Law has resulted in both embedded and performance computing systems which use an ever-increasing number of processing cores integrated in a single chip. Commercial systems are now available which provide hundreds of cores, and academics have proposed architectures for up to 1024 cores. Embedded multicores are increasingly popular because it is easier to guarantee hard real-time constraints using individual cores dedicated to tasks than with traditional time-multiplexed processing. However, finding the optimal hardware configuration to meet these requirements at minimum cost requires extensive trial-and-error exploration of the design space. This thesis tackles the problems encountered in the design of these large-scale multicore systems by first addressing the problem of fast, detailed micro-architectural simulation. Initially addressing embedded systems, this work exploits the lack of hardware cache-coherence support in many deeply embedded systems to increase the available parallelism in the simulation. Then, by partitioning the NoC and using packet counting and cycle skipping, it reduces the amount of computation required to accurately model the NoC interconnect. In combination, this enables simulation speeds significantly higher than the state of the art, while maintaining a lower error, relative to real hardware, than any similar simulator. Simulation speeds reach up to 370 MIPS (million target instructions per second), or 110 MHz, which is better than typical FPGA prototypes and approaches final ASIC production speeds. This is achieved while maintaining an error of only 2.1%, significantly lower than other similar simulators.
The thesis continues by scaling the simulator past large embedded systems up to 64- to 1024-core processors, adding support for coherent architectures using the same packet-counting techniques along with low-overhead context switching, to enable the simulation of such large systems with stricter synchronisation requirements. The new interconnect model was partitioned to enable parallel simulation, further improving simulation speeds without sacrificing any accuracy. These innovations were leveraged to investigate significant novel energy-saving optimisations to the coherency protocol, processor ISA, and processor micro-architecture. By introducing a new instruction, named wait-on-address, the energy spent during spin-wait style synchronisation events can be significantly reduced. This works by putting the core into a low-power idle state while the cache line of the indicated address is monitored for coherency action. Upon an update or invalidation (or a traditional timer or external interrupt) the core resumes execution, but the active energy of running the core pipeline and repeatedly accessing the data and instruction caches is effectively reduced to static idle power. The thesis also shows that existing combined software-hardware schemes to track data regions which do not require coherency can adequately address the directory-associativity problem, and introduces a new coherency sharer encoding which reduces the energy consumed by sharer invalidations when sharers are grouped closely together, as would be the case with a system running many tasks with a small degree of parallelism in each. The research concludes by using the extremely fast simulation speeds developed to produce a large set of training data, collecting various runtime and energy statistics for a wide range of embedded applications on a huge, diverse range of potential MPSoC designs.
This data was used to train a series of machine-learning-based models which were then evaluated on their capacity to predict performance characteristics of unseen workload combinations across the explored MPSoC design space, using only two sample simulations, with promising results from some of the machine learning techniques. The models were then used to produce a ranking of predicted performance across the design space: on average Random Forest was able to predict the best design to within 89% of the runtime performance of the actual best tested design, and better than 93% of the alternative design space. When predicting for a weighted metric of energy, delay and area, Random Forest on average produced results within 93% of the optimum. In summary, this thesis improves upon the state of the art for cycle-accurate multicore simulation, introduces novel energy-saving changes to the ISA and microarchitecture of future multicore processors, and demonstrates the viability of machine learning techniques to significantly accelerate the design-space exploration required to bring a new manycore design to market.
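The design-space ranking idea described above, training a random forest on simulated (design, runtime) pairs and using its predictions to pick promising designs, can be sketched as follows; the design parameters, their ranges, and the toy cost model are invented stand-ins for the thesis's real MPSoC simulations.

```python
# Illustrative sketch: rank unseen designs by predicted runtime using a
# random-forest regressor. The parameters and cost model are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# Invented design parameters: core count, cache size (KB), NoC link width.
designs = rng.integers([4, 32, 8], [1025, 1025, 129], size=(300, 3))

def simulated_runtime(d):
    cores, cache, width = d
    return 1e6 / (cores * np.sqrt(cache) * width)  # toy cost model

runtimes = np.array([simulated_runtime(d) for d in designs])
model = RandomForestRegressor(n_estimators=200, random_state=1)
model.fit(designs[:250], runtimes[:250])        # "training simulations"

unseen = designs[250:]
pred = model.predict(unseen)                    # rank without simulating
best_predicted = unseen[np.argmin(pred)]
```

In the thesis's setting each training sample would come from a (now fast) cycle-accurate simulation rather than a closed-form cost function, which is what makes the surrogate worthwhile.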
|
9 |
Machine Learning in Logistics: Machine Learning Algorithms : Data Preprocessing and Machine Learning Algorithms / Andersson, Viktor. January 2017 (has links)
Data Ductus is a Swedish IT consultancy whose customer base ranges from small startups to large corporations. The company has grown steadily since the 1980s and has established offices in both Sweden and the US. With the help of machine learning, this project presents a possible solution to the errors caused by the human factor in the logistics business. A way of preprocessing data before applying it to a machine learning algorithm, as well as a couple of algorithms to use, will be presented.
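The "preprocess, then learn" workflow this abstract outlines can be sketched with a scikit-learn pipeline; the shipment features, carrier codes, and delay labels below are invented for illustration and are not from the thesis's dataset.

```python
# Minimal sketch of preprocessing (scaling + one-hot encoding) feeding a
# classifier. All logistics data here is invented.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Invented shipment records: numeric features plus a categorical carrier.
df = pd.DataFrame({
    "weight_kg": [2.0, 15.5, 7.3, 30.1, 1.2, 22.8],
    "distance_km": [120, 560, 80, 900, 45, 700],
    "carrier": ["A", "B", "A", "C", "B", "C"],
    "delayed": [0, 1, 0, 1, 0, 1],  # label: was delivery delayed?
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["weight_kg", "distance_km"]),
    ("cat", OneHotEncoder(), ["carrier"]),
])
pipe = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
pipe.fit(df.drop(columns="delayed"), df["delayed"])
```

Bundling the preprocessing into the pipeline means the same scaling and encoding learned on training data is applied, unchanged, to new shipments at prediction time.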
|
10 |
Machine Learning Algorithms for Multi-objective Design Optimization of Switched Reluctance Motors (SRMs) / Omar, Mohamed. January 2024 (has links)
Switched Reluctance Motors (SRMs) are gaining recognition due to their robust design, cost-effectiveness, fault tolerance, and reliable high-speed performance, positioning them as promising alternatives to traditional electric motors. However, SRMs face high torque ripples, vibration, acoustic noise, and nonlinear modeling complexities. Through careful geometry design optimization, these drawbacks can be mitigated. Design optimization for SRMs is a multi-objective and nonlinear problem that requires an accurate finite element analysis (FEA) model to relate designable parameters to output objectives. The geometric design process follows a multi-stage and iterative approach, leading to prohibitive computational time until the optimal design is reached.
Machine learning algorithms (MLAs) have recently attracted attention in electric machine design. This study introduces an extensive analysis of various MLAs applied to SRM modeling and design, and presents a robust framework for comprehensively evaluating these MLAs, facilitating the selection of the optimal machine learning topology for SRM design. Existing research on the geometry optimization of SRMs using MLAs has focused only on the machine's static characteristics.
This thesis introduces an advanced optimization method that uses an MLA as a surrogate model for both the static and dynamic characteristics of the SRM. The dynamic model incorporates conduction angle optimization to enhance the torque profile. The proposed MLA maps the SRM's geometric parameters, the stator and rotor pole arc angles, to dynamic performance metrics such as average torque and torque ripple. The optimal design improves the average torque and significantly reduces torque ripple.
Radial forces constitute a critical objective that should be considered alongside average torque, efficiency, and torque ripple in the design optimization of SRMs. Accurate modeling of radial forces is a prerequisite for optimizing motor geometry to mitigate their adverse effects on vibrations and acoustic noise. This work presents an MLA-based surrogate model for the most influential radial force harmonic components, facilitating the integration of radial force reduction into a multi-objective optimization framework.
The proposed optimization framework employs two MLA-based surrogate models: the first correlates SRM pole arc angles with average torque and torque ripples, while the second models the most significant radial force harmonics. A genetic algorithm leverages these surrogate models to predict new geometrical parameters that enhance the SRM's torque profile and reduce radial forces. The optimization framework significantly reduced torque ripples and radial forces while slightly increasing average torque. The optimal design candidates were verified using FEA and MATLAB simulations, confirming the effectiveness of the proposed method, which offers significant computational time savings compared to traditional FEA techniques. / Thesis / Doctor of Philosophy (PhD)
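The surrogate-plus-genetic-algorithm framework described above can be sketched as follows: a random-forest surrogate replaces the expensive FEA model, and a minimal GA searches it. The cost function, angle ranges, and GA settings are invented stand-ins, and the sketch optimizes a single toy objective rather than the thesis's full multi-objective problem.

```python
# Hedged sketch of surrogate-assisted geometry optimization. The "FEA"
# stand-in cost and all numeric ranges are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

def fea_stand_in(arcs):
    # Toy cost with a minimum near (22, 24) degrees; a real study would
    # evaluate torque ripple / radial forces via finite element analysis.
    return (arcs[:, 0] - 22.0) ** 2 + (arcs[:, 1] - 24.0) ** 2

# "FEA" training samples over stator/rotor pole arc angles (degrees).
X = rng.uniform([15, 15], [30, 30], size=(400, 2))
surrogate = RandomForestRegressor(n_estimators=100, random_state=2)
surrogate.fit(X, fea_stand_in(X))

# Minimal GA: truncation selection plus Gaussian mutation, evaluated on
# the cheap surrogate instead of repeated FEA runs.
pop = rng.uniform([15, 15], [30, 30], size=(60, 2))
for _ in range(30):
    cost = surrogate.predict(pop)
    parents = pop[np.argsort(cost)[:20]]             # keep the best third
    children = np.repeat(parents, 3, axis=0)
    children += rng.normal(0.0, 0.5, children.shape)  # mutate
    pop = np.clip(children, 15, 30)

best = pop[np.argmin(surrogate.predict(pop))]  # best pole arc pair found
```

The computational saving comes from evaluating thousands of candidate geometries on the surrogate while reserving FEA for verifying the final candidates, mirroring the verification step the abstract describes.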
|