231

Multitemporal Satellite Data for Monitoring Urbanization in Nanjing from 2001 to 2016

Cai, Zipan January 2017 (has links)
As urbanization accelerates worldwide, population continues to shift from rural to urban areas. China, the world's most populous country, has the highest urban population growth in Asia and in the world. Urbanization in China, however, gives rise to many social issues that reshape the living environment and cultural fabric, and these issues highlight the challenges of healthy and sustainable urban growth, particularly the sound planning of urban land use and land cover. It is therefore important to establish a comprehensive set of urban sustainable development strategies to avoid detours in the urbanization process. Faced with these phenomena, spatial and temporal technologies such as Remote Sensing and Geographic Information Systems (GIS) can help city decision makers make informed choices. Knowledge of land use and land cover change in rural and urban areas helps identify the rate and trend of urban growth both qualitatively and quantitatively, providing a firmer basis for planning and designing cities in a more scientific and environmentally friendly way.

This thesis analyses urban sprawl in Nanjing, Jiangsu, China by monitoring urban growth patterns over the study period. From 2001 to 2016, Nanjing Municipality experienced a substantial increase in urban area driven by its growing population. A high-accuracy supervised classifier, the Support Vector Machine (SVM), was used to extract thematic features from multitemporal satellite data including Landsat 7 ETM+, Landsat 8, and Sentinel-2A MSI, and the classified imagery was interpreted to identify urban sprawl patterns based on the land use and land cover features in 2001, 2006, 2011, and 2016. Two types of change detection analysis, post-classification comparison and change vector analysis (CVA), were performed to explore the extent of urban growth within the study region, and the two methods were compared through accuracy assessment. Based on the change detection results and the current state of urban development, constructive recommendations and future research directions are given.

By implementing the proposed methods, the urban land use and land cover changes were successfully captured. The results show a notable change in the urban or built-up land class: the urban area increased by 610.98 km2 while the agricultural land area decreased by 766.96 km2, indicating land conversion among these land cover classes during the study period. The urban area grew in every sub-period, while the growth rate declined over 2001 to 2016. Both change detection techniques produced a similar spatial distribution of urban expansion: the expanded urban or built-up land in Nanjing lies mainly around the central city, on both sides of the Yangtze River, and in the southwest. The accuracy assessment shows that post-classification comparison achieved a higher overall accuracy (86.11%) and Kappa coefficient (0.72) than CVA, whose overall accuracy and Kappa coefficient were 75.43% and 0.51 respectively. These results indicate that the agreement between predicted and reference data is at the 'good' level for post-classification comparison and 'moderate' for CVA, and they confirm the expectation from previous studies that the empirical threshold determination in CVA often leads to relatively poor change detection accuracy. In general, both change detection techniques proved effective and efficient for monitoring surface change across land cover classes within the study period, though each has its own advantages and disadvantages for analysing urban expansion.
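The two change-detection strategies compared in this abstract can be illustrated with a minimal sketch (not the thesis code). It assumes two co-registered acquisition dates stored as NumPy arrays: band stacks bands_t1/bands_t2 with shape (n_bands, rows, cols) and classified maps classes_t1/classes_t2 with shape (rows, cols). The threshold value is an illustrative stand-in for the empirically determined CVA threshold discussed above.

```python
# Sketch of post-classification comparison and change vector analysis (CVA)
# on synthetic two-date imagery; array names and the threshold are illustrative.
import numpy as np

def post_classification_comparison(classes_t1, classes_t2):
    """Return a boolean change mask and a 'from-to' transition code per pixel."""
    changed = classes_t1 != classes_t2
    # Encode transitions, e.g. class 3 -> class 1 becomes 301 (assumes < 100 classes).
    from_to = classes_t1.astype(np.int32) * 100 + classes_t2.astype(np.int32)
    return changed, np.where(changed, from_to, 0)

def change_vector_analysis(bands_t1, bands_t2, threshold):
    """Return a change mask based on the spectral change magnitude per pixel."""
    diff = bands_t2.astype(np.float64) - bands_t1.astype(np.float64)
    magnitude = np.sqrt((diff ** 2).sum(axis=0))   # Euclidean length of the change vector
    return magnitude > threshold                    # empirical threshold, as noted in the text

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    b1 = rng.random((6, 50, 50))
    b2 = b1 + rng.normal(0, 0.05, size=b1.shape)    # mostly unchanged scene
    b2[:, 10:20, 10:20] += 0.5                      # simulated built-up expansion
    mask = change_vector_analysis(b1, b2, threshold=0.6)
    print("CVA changed pixels:", int(mask.sum()))

    # Post-classification comparison on toy class maps derived from the same scene.
    c1 = (b1.mean(axis=0) > 0.5).astype(np.int8)
    c2 = (b2.mean(axis=0) > 0.5).astype(np.int8)
    changed, from_to = post_classification_comparison(c1, c2)
    print("pixels with a class transition:", int(changed.sum()))
```

Post-classification comparison inherits the errors of both input classifications, while CVA depends heavily on the chosen magnitude threshold, which is consistent with the accuracy figures reported in the abstract.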
232

Non-intrusive driver drowsiness detection system.

Abas, Ashardi B. January 2011 (has links)
The development of technologies for preventing drowsiness at the wheel is a major challenge in the field of accident avoidance systems. Preventing drowsiness during driving requires a method for accurately detecting a decline in driver alertness and a method for alerting and refreshing the driver. As a detection method, the authors have developed a system that uses image processing technology to analyse images of the road lane from a video camera, combined with steering wheel angle data collected from a car simulation system. The main contribution of this study is a novel algorithm for drowsiness detection and tracking, based on combining information from a road vision system with vehicle performance parameters. The algorithm was refined to detect the level of drowsiness more precisely through a Support Vector Machine (SVM) classification stage, yielding a robust and accurate drowsiness warning system. Because it relies on non-intrusive measurements from standard equipment sensors, the SVM-based system aims to reduce road accidents caused by drowsy drivers. The detection system provides a non-contact technique for judging various levels of driver alertness and facilitates early detection of a decline in alertness during driving. The presented results are based on a drowsiness database covering almost 60 hours of driving data collection, with all vehicle parameters recorded in a driving simulator. Using these features, an SVM drowsiness detection model was constructed; after several improvements, the classification results gave a very good indication of drowsiness. / Title page is not included.
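A minimal sketch, under stated assumptions, of the kind of SVM drowsiness classifier described above. The two features (lane-position deviation and steering-angle variability) and the synthetic data stand in for the 60-hour simulator database; neither the feature names nor the distributions come from the thesis.

```python
# Sketch: train an SVM to separate "alert" from "drowsy" driving using two
# made-up summary features; data is synthetic, not the thesis measurements.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)
n = 600
# Assumed pattern: drowsy driving shows larger lane deviation and burstier steering.
alert = np.column_stack([rng.normal(0.15, 0.05, n), rng.normal(2.0, 0.5, n)])
drowsy = np.column_stack([rng.normal(0.35, 0.10, n), rng.normal(4.0, 1.0, n)])
X = np.vstack([alert, drowsy])
y = np.array([0] * n + [1] * n)          # 0 = alert, 1 = drowsy

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), target_names=["alert", "drowsy"]))
```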
233

APP DEVELOPMENT, DATA COLLECTION AND MACHINE LEARNING IN DETERMINING MEDICINE DOSAGE FOR PARKINSON'S DISEASE

Olsson, Daniel, Eriksson, Jonathan, Soltani, Sedigheh January 2022 (has links)
Parkinson’s disease is a neurodegenerative disorder that affects approximately 0.2% of the population, with motor disabilities as its most prominent feature. A symptom of the disease is lowered dopamine levels, which are often countered by oral intake of a medication called Levodopa. However, for the dopamine levels to remain steady, a patient needs to take the medication regularly throughout the day, and as the disease and the treatment progress, finding the correct prescription becomes more difficult. This project is the continuation of a previous project by students at Uppsala University, in which a machine learning model based on a Support Vector Machine could classify data collected from a handheld accelerometer as the user being either under- or overdosed for Parkinson’s disease. The goal of this project was to achieve a similar result by developing a mobile app. The app lets the user follow a path displayed on the screen with their finger while collecting touch data in the form of coordinates and timestamps. The app development proved successful, and the collected data was sent for storage to a database hosted on the Google cloud service Firebase. From there, the data could be downloaded and imported into MATLAB, where an SVM model was set up and trained. Once trained on data collected from healthy individuals as well as patients suffering from Parkinson’s disease, the SVM could accurately differentiate between Parkinson’s disease data and healthy data, with a success rate of 91.7%.
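A rough sketch of the pipeline outlined above: summarise a traced path (x, y, timestamp) into a small feature vector and train an SVM to separate two dose classes. The feature set, the synthetic traces, and the mapping of tremor level to dose class are illustrative assumptions; the actual study used touch data collected through the app, stored in Firebase, and modelled in MATLAB.

```python
# Sketch: extract summary features from a touch trace and classify it with an SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def trace_features(xs, ys, ts):
    """Summary statistics of one path-tracing attempt."""
    dx, dy, dt = np.diff(xs), np.diff(ys), np.diff(ts)
    speed = np.hypot(dx, dy) / np.clip(dt, 1e-6, None)
    return np.array([speed.mean(), speed.std(), np.hypot(dx, dy).sum()])

rng = np.random.default_rng(2)

def synthetic_trace(tremor):
    # The target path is a straight line; tremor adds jitter around it.
    t = np.linspace(0.0, 5.0, 200)
    x = t + rng.normal(0, tremor, t.size)
    y = rng.normal(0, tremor, t.size)
    return trace_features(x, y, t)

X = np.array([synthetic_trace(0.02) for _ in range(100)] +   # class 0: smoother tracing
             [synthetic_trace(0.10) for _ in range(100)])    # class 1: more tremor
y = np.array([0] * 100 + [1] * 100)

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```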
234

Duality, Derivative-Based Training Methods and Hyperparameter Optimization for Support Vector Machines

Strasdat, Nico 18 October 2023 (has links)
In this thesis we consider the application of Fenchel's duality theory and gradient-based methods for the training and hyperparameter optimization of Support Vector Machines. We show that the dualization of convex training problems is possible theoretically in a rather general formulation. For training problems following a special structure (for instance, standard training problems) we find that the resulting optimality conditions can be interpreted concretely. This approach immediately leads to the well-known notion of support vectors and a formulation of the Representer Theorem. The proposed theory is applied to several examples such that dual formulations of training problems and associated optimality conditions can be derived straightforwardly. Furthermore, we consider different formulations of the primal training problem which are equivalent under certain conditions. We also argue that the relation of the corresponding solutions to the solution of the dual training problem is not always intuitive. Based on the previous findings, we consider the application of customized optimization methods to the primal and dual training problems. A particular realization of Newton's method is derived which could be used to solve the primal training problem accurately. Moreover, we introduce a general convergence framework covering different types of decomposition methods for the solution of the dual training problem. In doing so, we are able to generalize well-known convergence results for the SMO method. Additionally, a discussion of the complexity of the SMO method and a motivation for a shrinking strategy reducing the computational effort is provided. In a last theoretical part, we consider the problem of hyperparameter optimization. We argue that this problem can be handled efficiently by means of gradient-based methods if the training problems are formulated appropriately. Finally, we evaluate the theoretical results concerning the training and hyperparameter optimization approaches practically by means of several example training problems.
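For reference, the standard soft-margin training problem and its familiar dual can be stated as below. This is only the concrete special case most readers will recognise; the thesis derives such primal/dual pairs from the more general Fenchel-duality framework.

```latex
% Standard soft-margin SVM training problem and its well-known dual
% (k denotes the kernel, C > 0 the regularization parameter).
\begin{align*}
\text{(primal)}\qquad
  &\min_{w,\,b,\,\xi}\ \tfrac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n}\xi_i
   \quad\text{s.t.}\quad y_i\,(w^\top x_i + b) \ge 1-\xi_i,\quad \xi_i \ge 0,\\[4pt]
\text{(dual)}\qquad
  &\max_{\alpha}\ \sum_{i=1}^{n}\alpha_i
   - \tfrac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n}\alpha_i\alpha_j\,y_i y_j\,k(x_i,x_j)
   \quad\text{s.t.}\quad 0 \le \alpha_i \le C,\quad \sum_{i=1}^{n}\alpha_i y_i = 0.
\end{align*}
% Support vectors are the training points with \alpha_i > 0, and the expansion
% w = \sum_i \alpha_i y_i x_i (or its kernelized analogue) is the form in which
% the Representer Theorem appears.
```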
235

Identifying the beginning of a kayak race using velocity signal data

Kvedaraite, Indre January 2023 (has links)
A kayak is a small watercraft that moves over the water, propelled by a person sitting inside the hull and paddling with a double-bladed paddle. While kayaking can be casual, it is also a competitive sport raced in events up to the Olympic Games, so it is important to be able to analyse athletes’ performance during a race. To study races better, some kayaking teams and organizations have attached sensors to their kayaks. These sensors record various data, which is later used to generate performance reports. However, to generate such reports the coach must manually pinpoint the beginning of the race, because the sensors collect data before the actual race begins, which may include practice runs, warm-up sessions, or simply standing and waiting. Identifying the race start and the race sequence in the data is tedious, time-consuming work that could be automated. This project proposes an approach to identify kayak races from velocity signal data with the help of a machine learning algorithm. The proposed approach combines several techniques: signal preprocessing, a machine learning algorithm, and a programmatic step. Three machine learning algorithms were evaluated for detecting the race sequence: Support Vector Machine (SVM), k-Nearest Neighbour (kNN), and Random Forest (RF). SVM outperformed the other algorithms with an accuracy of 95%. A programmatic approach was then used to identify the start time of the race, with an average error of 0.24 seconds. The proposed approach was incorporated into a web-based application with a user interface that lets coaches automatically detect the beginning of a kayak race and the race signal sequence.
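A compact sketch of the two-stage idea: classify fixed-length windows of the velocity signal as race or non-race with an SVM, then locate the start programmatically as the first sustained run of race windows. The sampling rate, window length, features, run length, and synthetic session below are all assumptions for illustration, not the thesis parameters.

```python
# Sketch: window the velocity signal, classify windows with an SVM, then find
# the race start as the first run of consecutive race windows.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

FS = 10               # samples per second (assumed)
WIN = 5 * FS          # 5-second windows (assumed)

def window_features(v):
    return [v.mean(), v.std(), v.max()]

def to_windows(signal):
    n = len(signal) // WIN
    return np.array([window_features(signal[i * WIN:(i + 1) * WIN]) for i in range(n)])

rng = np.random.default_rng(3)
# Synthetic session: 60 s of drifting/warm-up, then a 120 s race at ~5 m/s.
idle = rng.normal(1.0, 0.5, 60 * FS).clip(0)
race = rng.normal(5.0, 0.3, 120 * FS)
session = np.concatenate([idle, race])

X_train = np.vstack([to_windows(idle), to_windows(race)])
y_train = np.array([0] * (len(idle) // WIN) + [1] * (len(race) // WIN))
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_train, y_train)

pred = clf.predict(to_windows(session))
# Programmatic start detection: first window that begins a run of 3 race windows.
run = np.convolve(pred, np.ones(3, dtype=int), mode="valid")
start_window = int(np.argmax(run == 3))   # 0 if no such run exists in this sketch
print("estimated race start at", start_window * WIN / FS, "seconds")
```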
236

Sentimental Analysis of Cyberbullying Tweets with SVM Technique

Thanikonda, Hrushikesh, Koneti, Kavya Sree January 2023 (has links)
Background: Cyberbullying involves the use of digital technologies to harass, humiliate, or threaten individuals or groups. This form of bullying can occur on various platforms such as social media, messaging apps, gaming platforms, and mobile phones. With the outbreak of COVID-19 there was a drastic increase in the use of social media, and this upsurge was accompanied by a rise in cyberbullying, making it a pressing issue that needs to be addressed. Sentiment analysis involves identifying and categorizing emotions and opinions expressed in text data using natural language processing and machine learning techniques, and SVM is a machine learning algorithm that has been widely used for sentiment analysis due to its accuracy and efficiency. Objectives: The main objective of this study is to use SVM for sentiment analysis of cyberbullying tweets and evaluate its performance. The study aimed to determine the feasibility of using SVM for sentiment analysis and to assess its accuracy in detecting cyberbullying. Methods: A quantitative research method is used in this thesis, and the data is analysed statistically. The dataset, taken from Kaggle, contains cyberbullying tweets. The collected data is preprocessed and used to train and test an SVM model, which is then evaluated on the test set using accuracy, precision, recall, and F1 score. Results: The results show that SVM is a suitable technique for sentiment analysis of cyberbullying tweets. The model detected cyberbullying with an accuracy of 82.3%, a precision of 0.82, a recall of 0.82, and an F1-score of 0.83. Conclusions: The study demonstrates the feasibility of using SVM for sentiment analysis of cyberbullying tweets. The high accuracy of the SVM model suggests that it can be used to build automated systems for detecting cyberbullying. The findings highlight the importance of developing tools to detect and address cyberbullying online, and the combination of sentiment analysis and SVM has the potential to make a significant contribution to the fight against cyberbullying.
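A minimal sketch of a text-classification pipeline of the kind evaluated above, using a tiny invented toy corpus in place of the Kaggle dataset. The thesis does not specify its preprocessing or kernel, so the TF-IDF vectorizer and linear SVM below are assumptions.

```python
# Sketch: TF-IDF features + linear SVM for cyberbullying detection on a toy corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

texts = [
    "you are worthless and everyone hates you",
    "nobody wants you here, just leave",
    "great game last night, congrats to the team",
    "thanks for the helpful answer, much appreciated",
    "you're so stupid it hurts to read your posts",
    "looking forward to the conference next week",
]
labels = [1, 1, 0, 0, 1, 0]   # 1 = cyberbullying, 0 = not

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.33, random_state=4, stratify=labels)

model = make_pipeline(TfidfVectorizer(lowercase=True, stop_words="english"), LinearSVC())
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), zero_division=0))
```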
237

Data mining inom tillverkningsindustrin : En fallstudie om möjligheten att förutspå kvalitetsutfall i produktionslinjer

Janson, Lisa, Mathisson, Minna January 2021 (has links)
In this work, a case study was carried out at Volvo Group in Köping. With the transition to Industry 4.0, the opportunities to use machine learning as a tool for analysing industrial data and further developing industrial production are growing. This work aims to investigate the possibility of predicting quality outcomes when the hub and main shaft are pressed together. The method comprises the implementation of three machine learning models and the evaluation of their performance relative to one another. When the models were applied to assembly data from the factory, the results were poor, indicating that the quality outcome cannot be predicted from the included variables. An examination of the possible causes concluded that this was most likely because either the models were unable to find relationships in the data or no relationship existed in the dataset. To determine which of these two factors was decisive, a fabricated dataset was created in which three new variables were introduced. The fabricated values of these variables were constructed so that a synthetic causal relationship existed between two of the variables and the quality outcome. When the models were applied to the fabricated data, all of them identified the synthetic relationship. From this it was concluded that the poor result was not due to the models' performance but to the absence of any relationship in the dataset of real assembly data. This supports the assessment that if the traceability of the components is improved in the future, and more machines in the production line feed data into a connected system, the study could be repeated with more variables and a larger dataset. Support vector machine was the model that performed best, given the performance measures used in this study. The fact that the models included in this study managed to identify the relationship in the data when that relationship was known to exist motivates their use in future studies. Finally, with improved traceability and an increasingly connected factory, there is an opportunity to use machine learning models as components in larger systems in order to achieve efficiency gains. / As the adaptation towards Industry 4.0 proceeds, the possibility of using machine learning as a tool for further development of industrial production becomes increasingly relevant. In this paper, a case study has been conducted at Volvo Group in Köping in order to investigate the possibility of predicting quality outcomes in the compression of hub and mainshaft. In the course of this study, three different machine learning models were implemented and compared amongst each other. A dataset containing data from Volvo's production site in Köping was utilized when training and evaluating the models. However, the low evaluation scores acquired from this indicate that the quality outcome of the compression could not be predicted given solely the variables included in that dataset. Therefore, a dataset containing three additional variables, consisting of fabricated values and a known causality between two of the variables and the quality outcome, was also utilized. The purpose of this was to investigate whether the poor evaluation metrics resulted from a non-existent pattern between the included variables and the quality outcome, or from the models not being able to find the pattern.
The performance of the models, when trained and evaluated on the fabricated dataset, indicates that they were in fact able to find the pattern that was known to exist. Support vector machine was the model that performed best, given the evaluation metrics chosen in this study. Consequently, if the traceability of the components were enhanced in the future and additional machines in the production line transmitted production data to a connected system, it would be possible to conduct the study again with additional variables and a larger dataset. The fact that the models included in this study succeeded in finding patterns in the dataset when such patterns were known to exist motivates the use of the same models. Furthermore, it can be concluded that with enhanced traceability of the components and a larger number of machines transmitting production data to a connected system, machine learning models could be utilized as components in larger business monitoring systems in order to achieve efficiencies.
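A small sketch of the fabricated-dataset experiment described above: two variables are given a synthetic causal link to the quality outcome, uninformative variables stand in for the real assembly data, and several classifiers are compared. The variable names, and the choice of the two comparison models alongside the SVM, are assumptions; the abstract names only the SVM.

```python
# Sketch: fabricated press-fit data with a known causal relationship, used to
# check whether the classifiers can recover it (as in the thesis experiment).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 1000
press_force = rng.normal(50.0, 5.0, n)      # synthetic causal variable 1 (assumed name)
interference = rng.normal(0.10, 0.02, n)    # synthetic causal variable 2 (assumed name)
noise_1 = rng.normal(0.0, 1.0, n)           # uninformative, like the real assembly data
noise_2 = rng.normal(0.0, 1.0, n)

# Quality outcome depends (noisily) only on the two causal variables.
score = 0.08 * (press_force - 50.0) + 20.0 * (interference - 0.10) + rng.normal(0, 0.3, n)
y = (score > 0).astype(int)
X = np.column_stack([press_force, interference, noise_1, noise_2])

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=5),
    "Logistic regression": make_pipeline(StandardScaler(), LogisticRegression()),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```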
238

Validation and Optimization of Hyperspectral Reflectance Analysis-Based Predictive Models for the Determination of Plant Functional Traits in Cornus, Rhododendron, and Salix

Valdiviezo, Milton I 01 January 2020 (has links)
Near-infrared (NIR) spectroscopy has become increasingly widespread across fields as an alternative method for phenotyping crops and plants at rates unparalleled by conventional means. With growing reliability, the convergence of NIR spectroscopy and modern machine learning represents a promising methodology offering access to rapid, high-throughput phenotyping at negligible cost, a prospect that excites agronomists and plant physiologists alike. However, as is true of all emergent methodologies, progressive refinement towards optimization exposes potential flaws and raises questions, one of which is the cornerstone of this study. Spectroscopic determination of plant functional traits uses plants' morphological and biochemical properties to make predictions, and has been validated at the community (inter-family) and individual crop (intraspecific) levels alike, yielding reliable predictions at both scales; yet what lies between these poles on the spectrum of taxonomic scale remains unexplored territory. In this study, we replicated the protocol used in studies at the aforementioned taxonomic extremes and applied it to an intermediate scale. Interestingly, we found that predictive models built upon hyperspectral reflectance data collected across three genera of woody plants, Cornus, Rhododendron, and Salix, yielded inconsistent predictions of varying accuracy within and across taxa. Identifying the potential cause(s) underlying this variability in predictive power at an intermediate taxonomic scale may reveal novel properties of the methodology and permit further optimization.
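As a sketch of the kind of reflectance-based trait model being validated, the snippet below fits a partial least squares regression to synthetic leaf spectra. PLSR is a common choice in leaf spectroscopy, but the abstract does not name the modelling approach, so both the model and the data shapes should be read as assumptions.

```python
# Sketch: predict a leaf trait from hyperspectral reflectance with PLS regression;
# spectra and trait values are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n_leaves, n_bands = 200, 500                 # e.g. reflectance sampled across 400-2400 nm
spectra = rng.random((n_leaves, n_bands))
# Synthetic trait (say, leaf nitrogen) driven by a few absorption-like regions.
trait = spectra[:, 120:140].mean(axis=1) - 0.5 * spectra[:, 380:400].mean(axis=1)
trait += rng.normal(0, 0.02, n_leaves)

X_train, X_test, y_train, y_test = train_test_split(spectra, trait, random_state=6)
pls = PLSRegression(n_components=10).fit(X_train, y_train)
print("held-out R^2:", round(r2_score(y_test, pls.predict(X_test).ravel()), 2))
```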
239

The Effects of Novel Feature Vectors on Metagenomic Classification

Plis, Kevin A. 24 September 2014 (has links)
No description available.
240

Predictive Analysis for Trauma Patient Readmission Database

Jiao, Weiwei 24 August 2017 (has links)
No description available.
