1

Blood glucose rate measured through the analysis of the human iris image, using neural networking

Alves, Deise Mota 28 August 2007 (has links)
This work contributes to the GlucoÍris project: a non-invasive system for quantifying the blood glucose level from images of the human iris. The project was developed at the Department of Mechanical Engineering (LabMetro) of the Federal University of Santa Catarina (UFSC), where an optical/mechanical system and a computer program for extracting quantitative parameters associated with the color and structure of the human iris were designed and evaluated. A first prototype capable of acquiring color digital images of the iris was developed together with a first version of the software, and iris changes were evaluated in 24 volunteers. Results from earlier stages of the project showed that the color of an iris image does change with variations in the blood glucose level, indicating that glycemia can be measured through the human iris. Building on those results, this work developed a neural-network system to estimate and predict glycemia values from analyses of human iris images. Using the color data extracted from the images and the known glycemia values for the cases studied, the network's ability to estimate new glycemia values for the volunteers in question was evaluated.
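As a loose illustration of the final step this abstract describes, the sketch below regresses glycemia from iris color features with a small feed-forward network. The feature layout (mean RGB values per iris region), the data, and all numbers are assumptions for the example, not the GlucoÍris implementation:

```python
# Illustrative sketch only: a small feed-forward network regressing blood
# glucose (mg/dL) from iris color features, in the spirit of the thesis.
# Feature layout and data are assumptions, not the GlucoIris design.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical dataset: mean R, G, B values for two iris regions per image.
X = rng.uniform(0.0, 255.0, size=(24, 6))        # 24 volunteers, 6 color features
y = 70.0 + 0.4 * X[:, 0] + rng.normal(0, 5, 24)  # synthetic glycemia values

scaler = StandardScaler().fit(X)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(scaler.transform(X), y)

# Estimate glycemia for a new iris image's color features.
new_features = rng.uniform(0.0, 255.0, size=(1, 6))
print("estimated glucose (mg/dL):", net.predict(scaler.transform(new_features))[0])
```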
2

Data-driven Approach to Predict the Static and Fatigue Properties of Additively Manufactured Ti-6Al-4V

January 2020 (has links)
Additive manufacturing (AM) has been extensively investigated in recent years to explore its application across a wide range of engineering functionalities, such as mechanical, acoustic, thermal, and electrical properties. This study takes a data-driven approach to predicting the mechanical properties of additively manufactured metals, specifically Ti-6Al-4V. Extensive data for Ti-6Al-4V produced by three Powder Bed Fusion (PBF) processes, Selective Laser Melting (SLM), Electron Beam Melting (EBM), and Direct Metal Laser Sintering (DMLS), are collected from the open literature and used to develop models that estimate the alloy's mechanical properties. Two models are developed that relate the fabrication process parameters to the static and fatigue properties of AM Ti-6Al-4V. To characterize the relationship between the input and output parameters, each model is built both with linear multiple-regression analysis and with a non-linear Artificial Neural Network (ANN) trained with Bayesian regularization. Uncertainties associated with the performance predictions and their sensitivity to the processing parameters are investigated, and extensive sensitivity studies identify the factors most important for future optimal design. Conclusions and directions for future work are drawn for the investigated material. / Dissertation/Thesis / Masters Thesis Mechanical Engineering 2020
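A rough sketch of the two model families this abstract compares, fitted on synthetic data. The process parameters chosen (laser power, scan speed, layer thickness) are placeholders for the literature-derived inputs, and sklearn's L2 penalty stands in for Bayesian regularization, so this is an analogy rather than the thesis's actual setup:

```python
# Illustrative comparison of the two model families in the thesis: linear
# multi-regression vs. a regularized ANN mapping PBF process parameters to a
# static property (here ultimate tensile strength). All data are synthetic,
# and the L2 penalty (alpha) stands in for Bayesian regularization.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Hypothetical process parameters: laser power (W), scan speed (mm/s),
# layer thickness (um): placeholders for the literature-derived inputs.
X = rng.uniform([150, 600, 30], [400, 1400, 60], size=(200, 3))
uts = 900 + 0.3 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 10, 200)  # MPa, synthetic

linear = LinearRegression().fit(X, uts)

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(16, 8), alpha=1e-2,
                   max_iter=8000, random_state=1).fit(scaler.transform(X), uts)

query = np.array([[280.0, 1000.0, 40.0]])
print("linear UTS estimate (MPa):", linear.predict(query)[0])
print("ANN UTS estimate (MPa):   ", ann.predict(scaler.transform(query))[0])
```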
3

Development of Machine Learning Techniques for Applications in the Steel Industry

Alex Joseph Raynor (8812160) 08 May 2020 (has links)
For a long time, the data collected through sensors and other means was seen as inconsequential. With the recent developments in machine learning, data science, and statistical analysis, and with the rapid growth in computational power delivered by the ever-expanding computer industry, data is no longer treated as secondhand information. Data collection is and will continue to be a major driving force in many applications, as the predictive power it provides is invaluable. One area that could benefit dramatically from predictive techniques is the steel industry. This thesis applied several machine learning techniques to predict a class of steel deformation issues collectively known as the hook index problem [1].

The first machine learning technique utilized was neural networking. The neural networks built and tested in this research used both classification and regression prediction models and implemented the gradient descent and adaptive moment estimation algorithms. Applying these networks and learning strategies to the line process data, regression-based networks made predictions with average percent error ranging from 106-114%, and classification-based networks made predictions with average accuracy of 38-40%.

To remedy the problems with the neural networks, Bayesian networking techniques were implemented, with the Naïve Bayesian framework as the main model and variable optimization techniques used to create well-performing network structures. Drawing on the same line process data, the classification-based Bayesian networks made predictions with average accuracy of 64-65%. Because of this increased accuracy and their ability to draw causal reasoning from data, Bayesian networking was the preferred machine learning technique for this research application.
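As a hedged illustration of the preferred approach, the sketch below fits a Naïve Bayes classifier to hypothetical line-process features; the feature names, data, and hook-index labeling are invented for the example, since the thesis's actual line data is not public:

```python
# Illustrative Naive Bayes classifier for a hook-index class, the approach
# the thesis found most accurate. Feature names and data are hypothetical
# stand-ins for the real line process data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(2)

# Hypothetical line features: strip temperature (C), roll force (kN), speed (m/s).
X = rng.normal([850.0, 1200.0, 5.0], [30.0, 80.0, 0.5], size=(500, 3))
hook_class = (X[:, 0] + 0.01 * X[:, 1] + rng.normal(0, 10, 500) > 865).astype(int)

model = GaussianNB().fit(X, hook_class)
print("predicted hook class:", model.predict([[860.0, 1250.0, 5.2]])[0])
print("class probabilities: ", model.predict_proba([[860.0, 1250.0, 5.2]])[0])
```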
4

Security related self-protected networks: Autonomous threat detection and response (ATDR)

Havenga, Wessel Johannes Jacobus January 2021 (has links)
Magister Scientiae - MSc / Cybersecurity defense tools, techniques and methodologies are constantly faced with growing challenges, including the evolution of highly intelligent and powerful new-generation threats. The main challenge posed by these modern digital multi-vector attacks is their ability to adapt using machine learning. Research shows that many existing defense systems fail to provide adequate protection against these latest threats. Hence, there is an ever-growing need for self-learning technologies that can autonomously adjust to the behaviour and patterns of offensive actors and systems. The accuracy and effectiveness of existing methods depend on decision making and manual input by human experts. This dependence causes 1) administration overhead, 2) variable and potentially limited accuracy and 3) delayed response time.
5

Heterogeneous networking for beyond 3G system in a high-speed train environment: investigation of handover procedures in a high-speed train environment and adoption of a pattern classification neural-networks approach for handover management

Ong, Felicia Li Chin January 2016 (has links)
Based on the targets outlined by the EU Horizon 2020 (H2020) framework, heterogeneous networking is expected to play a crucial role in delivering seamless, end-to-end ubiquitous Internet access for users. In due course, the current GSM-Railway (GSM-R) system will become unsustainable as the demand for packet-oriented services continues to increase, so the effort made in this research study to identify a plausible replacement system is timely and appropriate. The study investigates a hybrid satellite and terrestrial network for enabling ubiquitous Internet access in a high-speed train environment, focusing on the mobility management aspect of the system and primarily on handover management. A proposed handover strategy, employing the RACE II MONET and ITU-T Q.65 design methodology, is presented. This includes identifying the functional model (FM), which is then mapped to the functional architecture (FUA) based on the Q.1711 IMT-2000 FM. The signalling protocols, information flows and message formats based on the adopted design methodology are also specified, and the approach is simulated in OPNET, with the findings presented and discussed. The prospect of employing neural networks (NN) for handover is also explored; this part of the study focuses on the use of pattern classification neural networks to aid the handover process, simulated in MATLAB. The simulation outcomes demonstrated the effectiveness and appropriateness of the NN algorithm and its competence in facilitating the handover process.
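A minimal sketch of a pattern-classification network for the handover decision described above. The inputs assumed here (terrestrial and satellite signal strength, train speed), the decision rule, and the data are invented for the example; the thesis's own models were built in MATLAB, so this is an analogy rather than a reproduction:

```python
# Illustrative pattern-classification network for a handover decision
# between a terrestrial and a satellite link, loosely in the spirit of
# the thesis. Features, thresholds and data are assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical inputs: terrestrial RSS (dBm), satellite RSS (dBm), train speed (km/h).
X = rng.uniform([-110, -100, 0], [-60, -70, 350], size=(1000, 3))
handover = (X[:, 0] < -95).astype(int)   # synthetic rule: weak terrestrial signal

scaler = StandardScaler().fit(X)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=3000, random_state=3)
clf.fit(scaler.transform(X), handover)

sample = np.array([[-98.0, -82.0, 300.0]])
print("hand over to satellite?", bool(clf.predict(scaler.transform(sample))[0]))
```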
6

Security related self-protected networks: autonomous threat detection and response (ATDR)

Havenga, Wessel Johannes Jacobus January 2021 (has links)
Doctor Educationis / Cybersecurity defense tools, techniques and methodologies are constantly faced with growing challenges, including the evolution of highly intelligent and powerful new-generation threats. The main challenge posed by these modern digital multi-vector attacks is their ability to adapt using machine learning. Research shows that many existing defense systems fail to provide adequate protection against these latest threats. Hence, there is an ever-growing need for self-learning technologies that can autonomously adjust to the behaviour and patterns of offensive actors and systems. The accuracy and effectiveness of existing methods depend on decision making and manual input by human experts. This dependence causes 1) administration overhead, 2) variable and potentially limited accuracy and 3) delayed response time. In this thesis, Autonomous Threat Detection and Response (ATDR) is proposed as a general method aimed at contributing toward security-related self-protected networks. Through a combination of unsupervised machine learning and deep learning, ATDR is designed as an intelligent and autonomous decision-making system that uses big-data processing and data-frame pattern identification layers to learn sequences of patterns and derive real-time data formations. The system enhances threat detection and response capabilities, accuracy and speed. The research builds on a solid foundation around the scope of existing methods and the common problem statements and findings of other authors.
7

Optical Satellite/Component Tracking and Classification via Synthetic CNN Image Processing for Hardware-in-the-Loop Testing and Validation of Space Applications Using Free-Flying Drone Platforms

Peterson, Marco Anthony 21 April 2022 (has links)
The proliferation of reusable space vehicles has fundamentally changed how we inject assets into orbit and beyond, increasing the reliability and frequency of launches and leading to the rapid development and adoption of new technologies in the aerospace sector, such as computer vision (CV), machine learning (ML), and distributed networking. All of these technologies are necessary to enable genuinely autonomous decision-making for space-borne platforms as our spacecraft travel further into the solar system and our mission sets become more ambitious, requiring true "human out of the loop" solutions for a wide range of engineering and operational problem sets. Systems proficient at classifying, tracking, capturing, and ultimately manipulating orbital assets and components for maintenance and assembly, in the persistently dynamic environment of space and on the surfaces of other celestial bodies, perform tasks commonly referred to as On-Orbit Servicing and In-Space Assembly and have a unique automation potential. Given the inherent dangers of the manned spaceflight and extravehicular activity (EVA) methods currently employed for spacecraft construction and maintenance tasks, coupled with the current limitations on long-duration human flight outside of low Earth orbit, space robotics armed with generalized sensing and control machine learning architectures is a tremendous enabling technology. However, the large amounts of sensor data required to adequately train neural networks for these space-domain tasks are either limited or non-existent, requiring alternate means of data collection or generation. Additionally, the tools and methodologies required for wide-scale hardware-in-the-loop simulation, testing, and validation of these new technologies outside of multimillion-dollar facilities are largely in their developmental stages. This dissertation proposes a novel approach for simulating space-based computer vision sensing and robotic control using both physical and virtual reality testing environments. The methodology is designed to be both affordable and expandable, enabling hardware-in-the-loop simulation and validation of space systems at large scale across multiple institutions. While the computer vision models in this work are narrowly focused on solving imagery problems found on orbit, the approach can be extended to any problem set that requires robust onboard computer vision, robotic manipulation, and free-flight capabilities. / Doctor of Philosophy / The real-world imagery of space assets and planetary surfaces required to train neural networks to autonomously identify, classify, and make decisions in these environments is limited, non-existent, or prohibitively expensive to obtain. This work leverages the Unreal Engine, motion capture, and theatre projection technologies, combined with robotics, computer vision, and machine learning, to recreate these worlds for optical machine learning testing and validation in space and other celestial applications. The dissertation also incorporates domain randomization methods to increase neural network performance for the above-mentioned applications.
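As a small sketch of the domain-randomization idea mentioned in the lay abstract, the snippet below applies random lighting, color, and noise perturbations to a placeholder synthetic frame. The perturbation ranges are assumptions for the example, not the dissertation's settings:

```python
# Illustrative domain-randomization augmentation of a synthetic satellite
# image, the technique used to close the sim-to-real gap. Pure numpy; the
# image here is a random placeholder for an Unreal Engine render.
import numpy as np

rng = np.random.default_rng(4)

def randomize(image: np.ndarray) -> np.ndarray:
    """Apply random brightness, per-channel color shift, and sensor noise."""
    brightness = rng.uniform(0.6, 1.4)                 # lighting variation
    color_shift = rng.uniform(0.9, 1.1, size=3)        # white-balance variation
    noise = rng.normal(0.0, 0.02, size=image.shape)    # sensor noise
    out = image * brightness * color_shift + noise
    return np.clip(out, 0.0, 1.0)

render = rng.uniform(0.0, 1.0, size=(224, 224, 3))     # stand-in synthetic frame
augmented = [randomize(render) for _ in range(8)]      # randomized training views
print(len(augmented), "augmented frames, shape", augmented[0].shape)
```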
8

Value at Risk in international finance: evaluation of model performance in developed and emerging countries

Gaio, Luiz Eduardo 01 April 2015 (has links)
Given the requirements stipulated by regulatory agencies and international agreements, and in view of the numerous financial crises that have occurred in recent centuries, financial institutions have developed several tools to measure and control the risk inherent in their business. Despite the continuing evolution of risk calculation and measurement methodologies, Value at Risk (VaR) has become the reference tool for estimating market risk. In recent years, new techniques for calculating VaR have been developed, yet none has been established as the one that best fits risk across different markets and at different times; the literature offers no single model consistent with the diversity of markets. The general objective of this work is therefore to evaluate the market risk estimates generated by VaR-based models, applied to the indices of the main stock exchanges of developed and emerging countries, in both normal and financial crisis periods, in order to determine which are most effective in that role. The study considered unconditional VaR, using traditional models (Historical Simulation, Delta-Normal and Student's t) and models based on Extreme Value Theory; conditional VaR, comparing the ARCH family of models and RiskMetrics; and multivariate VaR, with bivariate GARCH models (Vech, Bekk and CCC), copula functions (Student's t, Clayton, Frank and Gumbel) and Artificial Neural Networks. The database consists of daily returns of the main stock indices of developed countries (Germany, United States, France, United Kingdom and Japan) and emerging countries (Brazil, Russia, India, China and South Africa) from 1995 to 2013, covering the crises of 1997 and 2008. The results were, to some extent, at odds with the premises established by the research hypotheses. Across more than a thousand estimated models, the conditional models outperformed the unconditional ones in most cases; in particular, the GARCH(1,1) model, a standard in the literature, produced adequate fits in 93% of the cases. For the multivariate analysis it was not possible to single out one best model: the Vech, Bekk and Clayton copula models performed similarly, with good fits in 100% of the tests. Contrary to expectations, no significant differences were observed between the fits for developed and emerging countries or between crisis and normal periods. The study contributes the insight that the models used by financial institutions, even those recommended by renowned institutions, are not the ones that best estimate market risk. A deeper analysis of the risk estimators' performance, using simulations with each financial institution's own portfolios, remains warranted.
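As an illustration of the study's best-performing single model, the sketch below filters a return series through a GARCH(1,1) variance recursion and reads off a one-day 99% VaR under a normality assumption. The parameters (omega, alpha, beta) and the placeholder returns are assumptions for the example, not estimates from the thesis data:

```python
# Illustrative one-day 99% VaR from a GARCH(1,1) volatility filter, the
# model the study found effective in 93% of cases. Parameters are assumed
# for the sketch, not estimated from the thesis data.
import numpy as np

rng = np.random.default_rng(5)
returns = rng.normal(0.0, 0.01, size=1000)   # placeholder daily index returns

omega, alpha, beta = 1e-6, 0.08, 0.90        # assumed GARCH(1,1) parameters
var_t = np.var(returns)                      # initialize conditional variance
for r in returns:                            # sigma^2_t = w + a*r^2 + b*sigma^2_{t-1}
    var_t = omega + alpha * r**2 + beta * var_t

z_99 = 2.326                                 # 99% one-sided normal quantile
one_day_var = z_99 * np.sqrt(var_t)
print(f"one-day 99% VaR: {one_day_var:.4%} of portfolio value")
```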
