  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1281

Promises and Pitfalls of Machine Learning Classifiers for Inter-Rater Reliability Annotation

Ayres, Dorothy Lucille 03 June 2021 (has links)
No description available.
1282

Quality of SQL Code Security on StackOverflow and Methods of Prevention

Klock, Robert 29 July 2021 (has links)
No description available.
1283

Seco Analytics

Kruse, Gustav, Åhag, Lotta, Dahlback, Samuel, Åbrink, Albin January 2019 (has links)
Forecasting is a powerful tool that can save companies millions in revenue every year, provided the forecast is good enough. The problem lies in the "good enough" part. Many companies today use Excel to predict their future sales and trends. While this is a start, it is far from optimal. Seco Analytics aims to solve this issue by forecasting in an informative and easy manner. The web application uses the ARIMA analysis method to accurately calculate the trend for any selected country and product area. It also features external data that allows the user to compare internal data with relevant external data, such as GDP, and to calculate the correlation for the selected countries and product areas. This thesis describes the development process of the Seco Analytics application.
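The ARIMA-style trend forecast at the heart of such an application can be illustrated in miniature. The sketch below fits only the autoregressive part (an AR(1) model) by least squares and rolls it forward; the sales figures are hypothetical and this is not Seco's code or data.

```python
import numpy as np

def ar1_forecast(series, steps):
    """Fit y[t] = c + phi * y[t-1] by least squares, then forecast ahead."""
    y = np.asarray(series, dtype=float)
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])  # intercept + lagged value
    c, phi = np.linalg.lstsq(X, y[1:], rcond=None)[0]
    preds, last = [], y[-1]
    for _ in range(steps):
        last = c + phi * last  # feed each prediction back in
        preds.append(last)
    return preds

# Hypothetical monthly sales with an upward trend.
sales = [100, 104, 109, 113, 118, 122, 127]
print(ar1_forecast(sales, 3))
```

A full ARIMA model adds differencing and a moving-average term on top of this autoregressive core, which is what lets it handle non-stationary series like the trends described above.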
1284

Models for Pedestrian Trajectory Prediction and Navigation in Dynamic Environments

Kerfs, Jeremy N 01 May 2017 (has links)
Robots are no longer constrained to cages in factories and are increasingly taking on roles alongside humans. Before robots can accomplish their tasks in these dynamic environments, they must be able to navigate while avoiding collisions with pedestrians or other robots. Humans are able to move through crowds by anticipating the movements of other pedestrians and how their actions will influence others; developing a method for predicting pedestrian trajectories is a critical component of a robust robot navigation system. A current state-of-the-art approach for predicting pedestrian trajectories is Social-LSTM, which is a recurrent neural network that incorporates information about neighboring pedestrians to learn how people move cooperatively around each other. This thesis extends and modifies that model to output parameters for a multimodal distribution, which better captures the uncertainty inherent in pedestrian movements. Additionally, four novel architectures for representing neighboring pedestrians are proposed; these models are more general than current trajectory prediction systems and have fewer hyper-parameters. In both simulations and real-world datasets, the multimodal extension significantly increases the accuracy of trajectory prediction. One of the new neighbor representation architectures achieves state-of-the-art results while reducing the number of both parameters and hyper-parameters compared to existing solutions. Two techniques for incorporating the trajectory predictions into a planning system are also developed and evaluated on a real-world dataset. Both techniques plan routes that include fewer near-collisions than algorithms that do not use trajectory predictions. Finally, a Python library for Agent-Based-Modeling and crowd simulation is presented to aid in future research.
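The multimodal output described above can be sketched in a few lines: instead of predicting a single next position, the model emits parameters of a Gaussian mixture over 2-D displacements, and plausible futures are drawn by sampling. The mixture parameters below are hypothetical stand-ins, not the thesis model's outputs.

```python
import numpy as np

def sample_mixture(weights, means, stds, n, rng):
    """Draw n next-position samples from a K-mode 2-D Gaussian mixture.

    weights: (K,) mode probabilities; means, stds: (K, 2) per-mode parameters.
    """
    weights, means, stds = map(np.asarray, (weights, means, stds))
    comps = rng.choice(len(weights), size=n, p=weights)  # pick a mode per sample
    return means[comps] + stds[comps] * rng.standard_normal((n, 2))

rng = np.random.default_rng(0)
# Two plausible futures for a pedestrian: continue straight, or veer left.
samples = sample_mixture([0.7, 0.3],
                         [[1.0, 0.0], [0.5, 0.8]],
                         [[0.1, 0.1], [0.1, 0.1]],
                         1000, rng)
print(samples.mean(axis=0))
```

A unimodal predictor would average these two futures into a single implausible path between them; the mixture keeps both options distinct, which is the uncertainty the extension is meant to capture.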
1285

Designing Robust Trust Establishment Models with a Generalized Architecture and a Cluster-Based Improvement Methodology

Templeton, Julian 18 August 2021 (has links)
In Multi-Agent Systems, where intelligent agents (software entities representing individuals or organizations) interact with one another, it is important for the agents to be equipped with trust evaluation models that allow them to evaluate the trustworthiness of other agents, since dishonest agents may exist in an environment. Evaluating trust allows agents to find and select reliable interaction partners. The cost an agent incurs to establish trust in an environment can thus be compensated if its improved trustworthiness leads to an increased number of profitable transactions. It is therefore equally important to design effective trust establishment models that allow an agent to generate trust among other agents. This thesis focuses on improving the designs of existing and future trust establishment models. Robust trust establishment models, such as the Integrated Trust Establishment (ITE) model, may use dynamically updated variables to adjust the predicted importance of a task's criteria for specific trustors. This thesis proposes a cluster-based approach to update these dynamic variables more accurately and thereby improve trust establishment performance. Rather than sharing these dynamic variables globally, a model can learn to adjust a trustee's behaviours to trustor needs more accurately by storing the variables locally for each trustor and updating groups of these variables together, using data from the corresponding group of similar trustors. This work also presents a generalized trust establishment model architecture that makes models easier to design and more modular. The architecture introduces a new transaction-level preprocessing module to help improve a model's performance and defines a trustor-level postprocessing module that encapsulates the designs of existing models.
The preprocessing module allows a model to fine-tune the resources that an agent will provide during a transaction before it occurs. A trust establishment model, named the Generalized Trust Establishment Model (GTEM), is designed to showcase the benefits of the preprocessing module. Simulated comparisons between a cluster-based version of ITE and the original ITE indicate that the cluster-based approach helps trustees better meet the expectations of trustors while minimizing the cost of doing so. Comparing GTEM against a version of itself without the preprocessing module, and against two existing models, in simulated tests shows that the preprocessing module improves a trustee's trustworthiness and meets trustor desires at a faster rate than without preprocessing.
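The cluster-based idea above, grouping similar trustors so their locally stored variables can be updated from pooled data, can be sketched with a tiny k-means over trustor preference profiles. The profiles, criteria, and clustering choice here are illustrative assumptions, not ITE's actual update rule.

```python
import numpy as np

def cluster_trustors(profiles, k, iters=20, seed=0):
    """Tiny k-means: assign each trustor profile (a row) to one of k groups."""
    rng = np.random.default_rng(seed)
    X = np.asarray(profiles, dtype=float)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each trustor to the nearest group center.
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Hypothetical trustor profiles: importance weights on (quality, timeliness).
profiles = [[0.9, 0.1], [0.85, 0.15], [0.2, 0.8], [0.1, 0.9]]
labels = cluster_trustors(profiles, k=2)
print(labels)  # the first two trustors share one group, the last two the other
```

Dynamic variables would then be updated once per group from all of that group's transaction data, rather than globally across all trustors or in isolation per trustor.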
1286

Multi-Objective Heterogeneous Multi-Asset Collection Scheduling Optimization with High-Level Information Fusion

Muteba Kande, Joel 18 August 2021 (has links)
Surveillance of areas of interest through image acquisition is becoming increasingly essential for intelligence services. Several types of platforms equipped with sensors are used to collect good-quality images of the areas to be monitored. Work in this field proceeds at different levels: some studies focus on improving the quality of the images acquired by the sensors, others on the efficiency of platforms such as satellites, aircraft, and vessels that navigate the areas of interest, and still others on optimizing the trajectories of these platforms. In addition, intelligence organizations have shown interest in carrying out such missions by sharing their resources. This thesis presents a framework whose main objective is to allow intelligence organizations to carry out their observation missions by pooling their platforms with other organizations that have similar or geographically close targets. The framework uses Multi-Objective Optimization algorithms based on genetic algorithms to optimize such mission planning. Research on sensor fusion is a key point of this thesis: researchers have shown that an image resulting from the fusion of two images from different sensors can provide more information than the original images. Given that the main goal of observation missions is to collect quality imagery, this work also uses High-Level Information Fusion to optimize mission planning based on image quality and fusion. The results of the experiments not only demonstrate the added value of this framework but also highlight its strengths (through performance metrics) compared to other similar frameworks.
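Multi-objective genetic algorithms of the kind used here rank candidate plans by Pareto dominance rather than a single score. The sketch below shows only that core comparison, with hypothetical (fuel cost, negated image quality) objective vectors, both minimized; it is not the thesis framework.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of candidate mission plans."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Hypothetical plans scored as (fuel cost, -image quality).
plans = [(4, -9), (2, -5), (3, -9), (5, -3), (3, -5)]
print(pareto_front(plans))
```

A genetic algorithm such as NSGA-II applies this dominance test each generation to decide which candidate plans survive, so the final population approximates the trade-off curve between the objectives instead of a single "best" plan.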
1287

Deep Learning Based Sound Event Recognition

Bajzík, Jakub January 2019 (has links)
This thesis deals with the processing and recognition of events in audio signals. The work explores the possibility of visualizing the audio signal and then using convolutional neural networks as a classifier for recognition in real-world use. The recognized audio events are gunshots placed in a sound background such as street noise, human voices, animal sounds, and other forms of random noise. Before the implementation, a large database is created with various parameters, notably reverberation and time positioning within the processed section. The freely available Keras and TensorFlow platforms are used to build and train the neural networks.
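The "visualization" step that turns an audio clip into a CNN input is typically a log-magnitude spectrogram. A minimal sketch, using a synthetic 1 kHz tone in place of a real gunshot recording (frame and hop sizes are illustrative assumptions):

```python
import numpy as np

def log_spectrogram(signal, frame=256, hop=128):
    """Magnitude STFT over Hann-windowed frames, log-scaled as a 2-D image."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    mags = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mags).T  # shape: (frequency_bins, time_steps)

# One second of a 1 kHz tone at an 8 kHz sample rate.
t = np.arange(8000) / 8000.0
spec = log_spectrogram(np.sin(2 * np.pi * 1000 * t))
print(spec.shape)
```

The resulting 2-D array can be fed to a convolutional network exactly like an image, which is what lets standard image-classification architectures recognize audio events.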
1288

Machine Learning for Question Answering in Czech

Pastorek, Peter January 2020 (has links)
This Master's thesis deals with teaching a neural network to answer questions in Czech. The neural networks are implemented in the Python programming language using the PyTorch library and are based on the LSTM architecture. They are trained on the Czech SQAD dataset. Because the Czech dataset is smaller than comparable English datasets, the neural networks are extended with algorithmic procedures. For easier application of these procedures and better accuracy, question answering is divided into smaller subtasks.
1289

Optimal Seeding Rates for New Hard Red Spring Wheat Cultivars in Diverse Environments

Stanley, Jordan D. January 2019 (has links)
Seeding rate in hard red spring wheat (HRSW) (Triticum aestivum L.) production impacts input cost and grain yield. Predicting the optimal seeding rate (OSR) for HRSW cultivars can aid growers and eliminate the need for costly seeding rate research. Research was conducted to determine the OSR of newer HRSW cultivars (released in 2013 or later) in diverse environments. Nine cultivars with diverse genetic and phenotypic characteristics were evaluated at four seeding rates in 11 environments throughout the northern Great Plains region in 2017-2018. Results from ANOVA indicated environment and cultivar were more important than seeding rate in determining grain yield. Though there was no environment × seeding rate interaction (P=0.37), OSR varied among cultivars within each environment. Cultivar × environment interactions were further explored with the objective of developing a decision support system (DSS) to aid growers in determining the OSR for the cultivar they select, and for the environment in which it is sown. Data from seeding rate trials conducted in ND and MN from 2013-2015 were also used. A novel method for characterizing cultivars for tillering capacity was developed and proposed as a source of information on tillering to be used in statistical modelling. A 10-fold repeated cross-validation of the seeding rate data was analyzed by 10 statistical learning algorithms to determine a model for predicting the OSR of newer cultivars. Models were similar in prediction accuracy (P=0.10). The decision tree model was considered the most reliable, as bias was minimized by pruning methods and model variance was acceptable for OSR predictions (RMSE=1.24). Findings from this model were used to develop the grower DSS for determining OSR dependent on cultivar straw strength, tillering capacity, and yield of the environment. Recommendations for OSR ranged from 3.1 to 4.5 million seeds ha⁻¹.
Growers can benefit from using this DSS by sowing at OSR relative to their average yields; especially when seeding new HRSW cultivars.
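A decision-tree DSS of the kind described reduces, at prediction time, to a handful of nested threshold rules over the three inputs. The sketch below is a toy stand-in: the split order and the specific rates (kept within the 3.1 to 4.5 range reported above) are invented for illustration and are not the thesis model.

```python
def recommend_osr(straw_strength, tillering, env_yield):
    """Toy decision-tree lookup for OSR (million seeds per hectare).

    Splits and outputs are illustrative only, not the fitted thesis model.
    """
    if tillering == "high":
        # Strong tillering compensates for fewer seeds.
        return 3.1 if env_yield == "high" else 3.5
    if straw_strength == "weak":
        # Limit stand density to reduce lodging risk.
        return 3.8
    return 4.5 if env_yield == "low" else 4.1

print(recommend_osr("strong", "low", "low"))
```

The appeal of this form for a grower-facing tool is that each recommendation can be traced to a short chain of readable conditions, unlike the other statistical learning models compared in the study.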
1290

Extracting Useful Information and Building Predictive Models from Medical and Health-Care Data Using Machine Learning Techniques

Kabir, Md Faisal January 2020 (has links)
In healthcare, a large amount of medical data has emerged. To effectively use these data to improve healthcare outcomes, clinicians need to identify the relevant measures and apply the correct analysis methods for the type of data at hand. In this dissertation, we present various machine learning (ML) and data mining (DM) methods that can be applied to the types of data sets available in the healthcare area. The first part of the dissertation investigates DM methods on healthcare and medical data to find significant information in the form of rules. Class association rule mining, a variant of association rule mining, was used to obtain rules with targeted items or class labels. These rules can be used to improve public awareness of different cancer symptoms and could also be useful for initiating prevention strategies. In the second part of the thesis, ML techniques are applied to healthcare and medical data to build predictive models. Three different classification techniques were investigated on a real-world breast cancer risk factor data set. Due to the imbalanced nature of the data set, various resampling methods were applied before the classifiers. It is shown that applying a resampling technique significantly improves performance compared to applying none. Moreover, a super learning (SL) technique that uses multiple base learners was investigated to boost the performance of the classification models. Two forms of super learner were investigated: the first uses two base learners, while the second uses three. The models were then evaluated against well-known benchmark data sets from the healthcare domain, and the results showed that the SL model performs better than the individual classifiers and the baseline ensemble.
Finally, we assessed cancer-relevant genes of prostate cancer with the most significant correlations with the clinical outcome of the sample type and overall survival. Rules were discovered from the RNA sequencing data of prostate cancer patients. Moreover, we built a regression model, and from this model, rules for predicting the survival time of patients were generated.
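The simplest of the resampling techniques mentioned above, random oversampling, just duplicates minority-class rows until the classes balance. A self-contained sketch with hypothetical risk-factor rows (the thesis may have used other resampling variants as well):

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class rows at random until all classes are balanced."""
    rng = random.Random(seed)
    counts = Counter(y)
    target = max(counts.values())
    Xr, yr = list(X), list(y)
    for label, n in counts.items():
        idx = [i for i, lab in enumerate(y) if lab == label]
        for _ in range(target - n):
            i = rng.choice(idx)  # resample an existing minority row
            Xr.append(X[i])
            yr.append(label)
    return Xr, yr

# Hypothetical risk-factor rows (age, family history): 4 negatives, 1 positive.
X = [[52, 0], [60, 1], [45, 0], [70, 1], [66, 1]]
y = [0, 0, 0, 0, 1]
Xr, yr = random_oversample(X, y)
print(Counter(yr))
```

Resampling is applied only to the training split; evaluating on resampled data would overstate performance, which is why the comparison against no resampling matters.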
