11 |
An experimental study on high speed milling and a predictive force model. Ekanayake, Risheeka Ayomi, Mechanical & Manufacturing Engineering, Faculty of Engineering, UNSW, January 2010 (has links)
This thesis presents the research work carried out in an experimental study on high speed milling and a predictive force model. Oxley's machining theory [36], a purely theoretical approach that had not previously been applied to the high speed milling process, is used to model this process in order to predict the cutting forces. An experimental programme was carried out to study and understand the high speed milling process and to collect force data for machining of AISI 1020 plain carbon steel at speeds from 250 to 500 m/min, feed rates from 0.025 to 0.075 mm/tooth and depths of cut of 0.5 and 0.8 mm, using three tool configurations with different nose radii. The model developed by Young [5] using Oxley's machining theory for conventional milling was first applied to the high speed milling operation, and its force predictions compared satisfactorily with the measured forces. Using this as the basis, a theoretical model was developed to predict the cutting forces in high speed milling. A smaller chip element was considered in applying the machining theory, to satisfy its assumption of two-dimensional deformation. Using the flow stress properties for plain carbon steels obtained by Oxley and his co-workers, the cutting force components (tangential, radial and vertical) were predicted with the newly developed model for AISI 1020 steel under the same cutting conditions used in the experiment. The model was able to predict the tangential force accurately, while the other two components showed good agreement with the experimental forces. The model was then verified using two other materials, AISI 1045 plain carbon steel and AISI 4140 alloy steel; the alloy steel was used in both the virgin and hardened (heat treated) states. The comparison of predictions with experimental forces showed good results for these two additional materials. From the results obtained, it is concluded that the developed model can be used to predict the tangential cutting force accurately, while predicting the other force components with favourable accuracy.
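The abstract does not reproduce the model's equations; as a point of reference only, the sketch below implements the much simpler classical mechanistic milling relation (tangential and radial force proportional to the uncut chip thickness), not Oxley's predictive theory, and the cutting coefficients are purely illustrative rather than values from this work.

```python
import math

def instantaneous_forces(phi_deg, feed_per_tooth, depth_of_cut,
                         K_tc=1800.0, K_rc=700.0):
    """Classical mechanistic milling force estimate at one cutter angle.

    phi_deg        : rotation angle of the tooth (degrees)
    feed_per_tooth : feed per tooth in mm
    depth_of_cut   : axial depth of cut in mm
    K_tc, K_rc     : illustrative specific cutting coefficients (N/mm^2);
                     real values must be calibrated for the tool/workpiece pair.
    Returns (tangential, radial) force in newtons, or (0, 0) when the
    tooth is out of the cut.
    """
    phi = math.radians(phi_deg)
    if not (0.0 < phi < math.pi):        # tooth engaged over half a revolution here
        return 0.0, 0.0
    h = feed_per_tooth * math.sin(phi)   # uncut chip thickness
    return K_tc * depth_of_cut * h, K_rc * depth_of_cut * h

# Example at conditions in the experimental range (0.05 mm/tooth, 0.5 mm depth of cut)
print(instantaneous_forces(90.0, 0.05, 0.5))
```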
|
12 |
The Influence of Bitcoin on Ethereum Price Predictions. Caldegren, André, January 2018 (has links)
Cryptocurrencies are a cryptography-based technology that has increased massively in popularity in recent years. These currencies are traded on markets that specialize in cryptocurrency trade, where one cryptocurrency can be traded for another or bought with real-world money. These markets are quite volatile, meaning that the prices of most cryptocurrencies swing up and down a lot. The largest cryptocurrency is Bitcoin, but there are also more than 1,500 smaller ones, known as alternative coins, or altcoins. This thesis tries to find out whether it is possible to make accurate predictions about the future price of the altcoin Ethereum, and whether Bitcoin has some influence over the price of the selected altcoin. The predictions were made with an artificial neural network, an LSTM network, trained on labeled data from 2017. Predictions were then made at horizons of one hour, six hours and one day ahead through early 2018. The results showed that it is possible to make somewhat accurate predictions about the future price; the one-hour-ahead predictions were more accurate than both the six-hours-ahead and the full-day-ahead predictions. By comparing the loss rates of the neural networks trained only on Ethereum with those of the networks trained on both Bitcoin and Ethereum, it was made clear that training on both cryptocurrencies did not improve the prediction accuracy.
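The thesis code is not reproduced in the abstract; the sketch below only illustrates the comparison described, training the same LSTM regressor once on Ethereum alone and once on Ethereum plus Bitcoin as a second input feature. PyTorch, the 24-hour window, the layer sizes and the random stand-in data are all assumptions, not details from the thesis.

```python
import torch
import torch.nn as nn

class PricePredictor(nn.Module):
    """Minimal LSTM regressor: a window of past hourly prices -> next price."""
    def __init__(self, n_features):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=32, batch_first=True)
        self.head = nn.Linear(32, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # predict from the last time step

def train(model, x, y, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    return loss.item()

# Random data standing in for scaled 2017 hourly prices, cut into 24-hour windows.
x_eth     = torch.randn(256, 24, 1)   # Ethereum only
x_eth_btc = torch.randn(256, 24, 2)   # Ethereum plus Bitcoin as a second feature
y         = torch.randn(256, 1)

loss_eth  = train(PricePredictor(n_features=1), x_eth, y)
loss_both = train(PricePredictor(n_features=2), x_eth_btc, y)
print(loss_eth, loss_both)   # compare final training losses of the two setups
```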
|
13 |
Generalizations of the Diffie-Hellman protocol : exposition and implementation. Van der Berg, J.S., 21 April 2008 (has links)
A generalisation of the Diffie-Hellman protocol is studied in this dissertation. In the generalisation, polynomials are used to reduce the representation size of a public key, and linear shift registers are used for more efficient computations. These changes are important for the implementation of the protocol in constrained environments. The security of the Diffie-Hellman protocol and its generalisation is based on the same computational problems. Lastly, three examples of the generalisation and their implementation are discussed. For two of the protocols, models are given to predict the execution time, and it is determined how accurate these model predictions are. / Dissertation (MSc (Applied Mathematics))--University of Pretoria, 2007. / Mathematics and Applied Mathematics / MSc / unrestricted
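For orientation, the base protocol being generalised can be sketched in a few lines; the dissertation's polynomial and linear-shift-register variant is not reproduced here, and the prime and private values below are illustrative, not secure parameters.

```python
# Textbook Diffie-Hellman over a prime field: both parties arrive at the same
# shared secret without ever transmitting it.  The generalisation studied in
# the dissertation replaces this group exponentiation with polynomial-based
# computations; this sketch shows only the base protocol.
p = 0xFFFFFFFFFFFFFFC5          # 2**64 - 59, a prime; far too small for real use
g = 5                            # public base (illustrative)

a = 123456789                    # Alice's private exponent (illustrative)
b = 987654321                    # Bob's private exponent (illustrative)

A = pow(g, a, p)                 # Alice's public key
B = pow(g, b, p)                 # Bob's public key

shared_alice = pow(B, a, p)      # Alice computes (g**b)**a mod p
shared_bob   = pow(A, b, p)      # Bob computes   (g**a)**b mod p
assert shared_alice == shared_bob
print(hex(shared_alice))
```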
|
14 |
A Comparison of Recurrent Neural Networks Models and Econometric Models for Stock Market Predictions / En Jämförelse mellan "Recurrent Neural Network" Modeller samt Ekonometriska Modeller för Aktiemarknads Prediktioner. Keskitalo, Johan, January 2020 (has links)
It is well known that the stock market is highly volatile, so stock price prediction is a very challenging task. However, in order to make a profit or to understand the equity market, many investors and researchers use various statistical, econometric and neural network models to make the best stock price predictions possible. The aim of this thesis is to compare the predictive ability of two econometric models, the exponential moving average (EMA) and the autoregressive integrated moving average (ARIMA), and two neural network models, a simple recurrent neural network (RNN) and the long short-term memory (LSTM) model. The comparison is primarily made using Tesla as the underlying stock. Using mean square error (MSE) as the measure of performance, the LSTM model consistently outperformed the other three models.
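As a sketch of how the simplest of the four models could be scored, the following implements a one-step-ahead EMA forecast and evaluates it with MSE; the smoothing factor and the synthetic price series are placeholders rather than values from the thesis.

```python
import numpy as np

def ema_forecast(prices, alpha=0.2):
    """One-step-ahead EMA forecast: forecasts[t] is the EMA of prices[:t]."""
    forecasts = np.empty(len(prices))
    forecasts[0] = prices[0]                    # no history for the first point
    ema = prices[0]
    for t in range(1, len(prices)):
        forecasts[t] = ema                      # predict the next value with the current EMA
        ema = alpha * prices[t] + (1 - alpha) * ema
    return forecasts

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

# Synthetic daily closing prices standing in for the real series.
rng = np.random.default_rng(0)
prices = 100 + np.cumsum(rng.normal(0, 1, 250))

print("EMA one-step-ahead MSE:", mse(prices[1:], ema_forecast(prices)[1:]))
```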
|
15 |
Slowly Moving Black Holes In Khrono-Metric Model. Kovachik, Andrew, January 2024 (has links)
I have developed a technique to solve for the khronon field in a space-time containing a slowly moving black hole in the khrono-metric regime of Hořava Gravity. To develop these solutions I first revisited the khronon field around static, spherically symmetric black holes and perturbed them by a small velocity. The equations of motion of the perturbed field were identified, along with the linearly dependent series expansions at the boundary points. Using the boundary conditions and equations of motion, the khronon field was numerically solved throughout the space-time. These solutions were used to calculate a sensitivity parameter, which defines how the black hole mass appears to be modified due to its velocity. It was found that the sensitivity parameters are highly suppressed and that black holes should appear similar to their general relativity counterparts. / Thesis / Master of Science (MSc) / I have investigated slowly moving black holes in a theory of modified gravity. The goal was to see whether the theory breaks down in modelling these black holes and, if not, whether it is possible to test the theory using these predictions. I ultimately found that this theory can model slowly moving black holes, which would appear almost indistinguishable from classically moving black holes. This means that slowly moving black holes on their own will not provide a sufficient test of the theory.
|
16 |
Identifying tranquil environments and quantifying impacts. Watts, Gregory R., Pheasant, Robert J., 10 October 2014 (has links)
The UK has recently recognized the importance of tranquil spaces in the National Planning Policy Framework. This policy framework places considerable emphasis on sustainable development, with the aim of making planning more streamlined, localized and less restrictive. Specifically, it states that planning policies and decisions should aim to "identify and protect areas of tranquillity which have remained relatively undisturbed by noise and are prized for their recreational and amenity value for this reason". This is considered by some (e.g. National Park Authorities) to go beyond merely identifying quiet areas based on relatively low levels of mainly transportation noise, as the concept of tranquillity additionally implies consideration of the visual intrusion of man-made structures and buildings into an otherwise natural landscape. In the first instance this paper reports on applying a method for predicting the perceived tranquillity of a place and using this approach to classify the level of tranquillity in existing areas. It then seeks to determine the impact of a new build, taking the example of the construction of wind turbines in the countryside. For this purpose, noise level measurements, photographs and jury assessments of tranquillity were made at a medium-sized land-based wind turbine. It was then possible to calculate the decrement of noise levels and of visual prominence with distance, in order to determine the improvement in tranquillity rating with increasing range. The point at which tranquillity was restored in the environment allowed the position of the footprint boundary to be calculated. (C) 2014 Elsevier Ltd. All rights reserved.
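The paper's prediction tool itself is not reproduced in this record; the sketch below only illustrates the kind of calculation described, combining standard spherical-spreading decay of noise level with distance and a linear tranquillity rating whose coefficients are placeholders, not the published ones.

```python
import math

def noise_level_at(distance_m, source_level_db=105.0, ref_distance_m=1.0):
    """Free-field point-source decay: roughly 6 dB per doubling of distance."""
    return source_level_db - 20.0 * math.log10(distance_m / ref_distance_m)

def tranquillity_rating(noise_db, natural_features_pct,
                        intercept=10.0, k_noise=0.15, k_natural=0.04):
    """Illustrative linear rating: louder means less tranquil, a more natural
    scene means more tranquil.  Coefficients are placeholders, not the
    published prediction model."""
    return intercept - k_noise * noise_db + k_natural * natural_features_pct

# How the rating recovers with range from a turbine in a largely natural scene.
for d in (100, 200, 400, 800, 1600):
    level = noise_level_at(d)
    print(d, "m:", round(level, 1), "dB ->", round(tranquillity_rating(level, 90.0), 2))
```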
|
17 |
Performer mediation of anxiety : the role of effort regulation and strategy usage in high level sport performance. Bellamy, Mark James Brian, January 2000 (has links)
No description available.
|
18 |
A STUDY OF REAL TIME SEARCH IN FLOOD SCENES FROM UAV VIDEOS USING DEEP LEARNING TECHNIQUES. Gagandeep Singh Khanuja (7486115), 17 October 2019 (has links)
Following a natural disaster, one of the most important factors that influence a person's chances of survival, or of being found, is the time within which they are rescued. Traditional means of search, involving dogs, ground robots and humanitarian intervention, are time intensive and can be a major bottleneck in search operations. The main aim of these operations is to rescue victims without critical delay, in the shortest time possible, which can be realized in real time by using UAVs. With advancements in computational devices and the ability to learn from complex data, deep learning can be leveraged in a real-time environment for search and rescue operations. This research aims to improve on traditional search operations by using deep learning for real-time object detection and photogrammetry for precise geo-location mapping of the detected objects (person, car) in real time. To do so, pre-trained models such as Mask R-CNN, SSD300 and YOLOv3, as well as a custom-trained YOLOv3 model, have been deployed and their results compared as means of addressing the search operation in real time.
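The detection pipeline itself is not reproduced here; the sketch below illustrates only the photogrammetry step mentioned, converting a detected pixel into a ground offset via the ground sampling distance, assuming a nadir-pointing, north-aligned camera over flat ground and illustrative camera parameters that are not taken from the thesis.

```python
def ground_sampling_distance(altitude_m, focal_length_mm, sensor_width_mm, image_width_px):
    """Metres on the ground covered by one pixel for a nadir-pointing camera."""
    return (sensor_width_mm / 1000.0) * altitude_m / ((focal_length_mm / 1000.0) * image_width_px)

def pixel_to_ground_offset(px, py, image_width_px, image_height_px, gsd_m):
    """Offset (east, north) in metres of a detected pixel from the image centre,
    assuming a nadir, north-up image over flat ground."""
    east = (px - image_width_px / 2.0) * gsd_m
    north = (image_height_px / 2.0 - py) * gsd_m
    return east, north

# Illustrative UAV parameters (not from the thesis): 100 m altitude, 8.8 mm focal
# length, 13.2 mm sensor width, 4000 x 3000 px frames -> roughly 3.75 cm/pixel.
gsd = ground_sampling_distance(100.0, 8.8, 13.2, 4000)

# Centre pixel of a 'person' bounding box reported by the detector:
print(pixel_to_ground_offset(2500, 900, 4000, 3000, gsd))
```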
|
19 |
Great Expectations: The Role of Implicit Current Intentions on Predictions of Future Behaviour. Wudarzewski, Amanda, January 2011 (has links)
I present behavioural data contributing to existing research showing that (implicit) self-predictions are overly reliant on current intentions at the time of the decision (Koehler & Poon, 2006). Results are consistent with previous findings that self-predictions are often insensitive to translatability cues and overly influenced by desirability cues. We show that although participants typically benefit from a reminder, it is undervalued at the time of the decision (Experiments 1 & 3a), as participants are not willing to pay for a reminder service unless it is offered free of charge (Experiment 2). Our findings also show that participants fail to incorporate temporal delay sufficiently into their opt-in decisions, even though temporal delay was found to be a significant predictor of return behaviour (Experiments 1, 2 & 3b). Instead, decisions were highly influenced by desirability factors (Experiments 1 & 2), which were not significant predictors of task completion. Finally, a construal manipulation intended to induce participants to think about the decision options in either a concrete or an abstract way influenced decisions (Experiment 3a) and subsequently influenced how much participants benefitted from the reminder in task completion (Experiment 3b).
|
20 |
Combination of results from gene-finding programs. Hammar, Cecilia, January 1999 (has links)
Gene-finding programs available over the Internet today are shown to be nothing more than guides to possible coding regions in the DNA; the programs often make incorrect predictions. The idea of combining a number of different gene-finding programs arose a couple of years ago. Murakami and Takagi (1998) published one of the first attempts to combine results from gene-finding programs built on different techniques (e.g. artificial neural networks and hidden Markov models). The simple combination methods they used indicated that prediction accuracy could be improved by combining programs. In this project, artificial neural networks are used to combine the results of the three well-known gene-finding programs GRAILII, FEXH and GENSCAN. The results show a considerable increase in prediction accuracy compared to the best-performing single program, GENSCAN.
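As a rough sketch of the combination idea, assuming each program's coding score for a candidate region has already been extracted into a feature vector, scikit-learn's MLPClassifier stands in for the thesis's network; the synthetic data and the network architecture are placeholders, not details from the thesis.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Each row holds coding scores from GRAILII, FEXH and GENSCAN for one candidate
# region; the label says whether the region is truly coding.  Synthetic data
# stands in for the annotated sequences used in the thesis.
rng = np.random.default_rng(42)
scores = rng.random((1000, 3))
labels = (scores.mean(axis=1) + rng.normal(0, 0.15, 1000) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    scores, labels, test_size=0.25, random_state=0)

# Small feed-forward network that learns to weigh the three programs' scores.
combiner = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
combiner.fit(X_train, y_train)

print("combined accuracy:", combiner.score(X_test, y_test))
# Baseline: threshold the single best program's score (column 2 = GENSCAN here).
print("single-program accuracy:", np.mean((X_test[:, 2] > 0.5).astype(int) == y_test))
```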
|