51

Writing for Each Other: Dynamic Quest Generation Using In-Session Player Behaviors in MMORPGs

Mendonca, Sean Christopher 01 June 2020 (has links) (PDF)
Role-playing games (RPGs) rely on interesting and varied experiences to maintain player attention. These experiences are often provided through quests, which give players tasks that are used to advance stories or events unfolding in the game. Traditional quests in video games require very specific conditions to be met, and for participating members to advance them by carrying out pre-defined actions. These types of quests are generated with perfect knowledge of the game world and are able to force desired behaviors out of the relevant non-player characters (NPCs). This becomes a major issue in massively multiplayer online (MMO) games, where other players can disrupt the conditions needed for quests to unfold in a believable and immersive way, leading to the absence of a genuine multiplayer RPG experience. Our proposed solution is to dynamically create quests from real-time information on the unscripted actions of other NPCs and players in a game. This thesis shows that it is possible to create logical quests without global information knowledge, pre-defined story-trees, or prescribed player and NPC behavior. This allows players to become involved in storylines without having to perform any specific actions. Results are shown through a game scenario created from the Panoptyk Engine, a game engine in early development designed to test AI reasoning with information and the removal of the distinction between NPC and human players. We focus on quests issued by the NPC leaders of several in-game groups known as factions. Our generated quests are created logically from the pre-defined personality of each NPC leader, their memory of previous events, and information given to them by in-game sources. Long-spanning conflicts are seen to emerge from factions issuing quests against each other; these conflicts can be represented in a coherent narrative.
A user study shows that players felt quests were logical, that players were able to recognize quests were based on events happening in the game, and that players experienced follow-up consequences from their actions in quests.
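The core idea — quests generated from an NPC leader's personality and memory of unscripted events rather than from a story tree — can be illustrated with a minimal sketch. Everything here (the `Event` and `FactionLeader` classes, the personality labels, the quest phrasing) is hypothetical, not taken from the Panoptyk Engine:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str      # e.g. "theft", "assault"
    actor: str     # who did it
    target: str    # who it was done to

@dataclass
class FactionLeader:
    name: str
    personality: str          # toy labels: "vengeful" retaliates, "cautious" investigates
    memory: list = field(default_factory=list)

    def observe(self, event):
        # Leaders remember events they witness or are told about.
        self.memory.append(event)

    def issue_quest(self):
        # Generate a quest from remembered events rather than a scripted story tree.
        crimes = [e for e in self.memory if e.target == self.name]
        if not crimes:
            return None
        latest = crimes[-1]
        action = "attack" if self.personality == "vengeful" else "investigate"
        return f"{action} {latest.actor}"

leader = FactionLeader("Rangers", "vengeful")
leader.observe(Event("theft", actor="Bandits", target="Rangers"))
print(leader.issue_quest())
```

Because quests are derived from observed events, a quest issued against another faction can itself trigger events that the rival leader remembers and retaliates against — the long-spanning conflicts the abstract describes.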
52

Autonomous Robot Skill Acquisition

Konidaris, George D 13 May 2011 (has links)
Among the most impressive aspects of human intelligence is skill acquisition—the ability to identify important behavioral components, retain them as skills, refine them through practice, and apply them in new task contexts. Skill acquisition underlies both our ability to choose to spend time and effort to specialize at particular tasks, and our ability to collect and exploit previous experience to become able to solve harder and harder problems over time with less and less cognitive effort. Hierarchical reinforcement learning provides a theoretical basis for skill acquisition, including principled methods for learning new skills and deploying them during problem solving. However, existing work focuses largely on small, discrete problems. This dissertation addresses the question of how we scale such methods up to high-dimensional, continuous domains, in order to design robots that are able to acquire skills autonomously. This presents three major challenges; we introduce novel methods addressing each of these challenges. First, how does an agent operating in a continuous environment discover skills? Although the literature contains several methods for skill discovery in discrete environments, it offers none for the general continuous case. We introduce skill chaining, a general skill discovery method for continuous domains. Skill chaining incrementally builds a skill tree that allows an agent to reach a solution state from any of its start states by executing a sequence (or chain) of acquired skills. We empirically demonstrate that skill chaining can improve performance over monolithic policy learning in the Pinball domain, a challenging dynamic and continuous reinforcement learning problem. Second, how do we scale up to high-dimensional state spaces? While learning in relatively small domains is generally feasible, it becomes exponentially harder as the number of state variables grows.
We introduce abstraction selection, an efficient algorithm for selecting skill-specific, compact representations from a library of available representations when creating a new skill. Abstraction selection can be combined with skill chaining to solve hard tasks by breaking them up into chains of skills, each defined using an appropriate abstraction. We show that abstraction selection selects an appropriate representation for a new skill using very little sample data, and that this leads to significant performance improvements in the Continuous Playroom, a relatively high-dimensional reinforcement learning problem. Finally, how do we obtain good initial policies? The amount of experience required to learn a reasonable policy from scratch in most interesting domains is unrealistic for robots operating in the real world. We introduce CST, an algorithm for rapidly constructing skill trees (with appropriate abstractions) from sample trajectories obtained via human demonstration, a feedback controller, or a planner. We use CST to construct skill trees from human demonstration in the Pinball domain, and to extract a sequence of low-dimensional skills from demonstration trajectories on a mobile robot. The resulting skills can be reliably reproduced using a small number of example trajectories. Finally, these techniques are applied to build a mobile robot control system for the uBot-5, resulting in a mobile robot that is able to acquire skills autonomously. We demonstrate that this system is able to use skills acquired in one problem to more quickly solve a new problem.
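The chaining construction can be sketched in a toy 1D domain: each skill has an initiation set (where it can start) and drives the agent to a target state, and the chain is built backward from the goal so that each new skill terminates inside the initiation set of the skill that follows it. The interval representation and `reach` parameter below are illustrative assumptions, not the dissertation's actual formulation:

```python
def make_skill(lo, hi):
    """A toy 1D skill: executable on [lo, hi], drives the agent to hi."""
    return {"initiation": (lo, hi), "target": hi}

def chain_skills(start, goal, reach=0.25):
    """Build a chain backward from the goal until the start state is covered."""
    skills = []
    frontier = goal
    while frontier > start:
        lo = max(start, frontier - reach)
        skills.append(make_skill(lo, frontier))
        frontier = lo            # the next skill must terminate in this one's initiation set
    return list(reversed(skills))  # execution order: start -> goal

def execute(state, skills):
    for s in skills:
        lo, hi = s["initiation"]
        assert lo <= state <= hi   # chaining guarantees each skill is applicable
        state = s["target"]
    return state

chain = chain_skills(start=0.0, goal=1.0)
print(execute(0.0, chain))
```

The backward construction is what lets the agent reach the goal from any covered start state: executing the chain walks the frontier forward, one acquired skill at a time.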
53

Query-Time Optimization Techniques for Structured Queries in Information Retrieval

Cartright, Marc-Allen 01 September 2013 (has links)
The use of information retrieval (IR) systems is evolving towards larger, more complicated queries. Both the IR industrial and research communities have generated significant evidence indicating that in order to continue improving retrieval effectiveness, increases in retrieval model complexity may be unavoidable. From an operational perspective, this translates into an increasing computational cost to generate the final ranked list in response to a query. Therefore we encounter an increasing tension in the trade-off between retrieval effectiveness (quality of the result list) and efficiency (the speed at which the list is generated). This tension creates a strong need for optimization techniques to improve the efficiency of ranking with respect to these more complex retrieval models. This thesis presents three new optimization techniques designed to deal with different aspects of structured queries. The first technique involves manipulation of interpolated subqueries, a common structure found across a large number of retrieval models today. We then develop an alternative scoring formulation to make retrieval models more responsive to dynamic pruning techniques. The last technique is delayed execution, which focuses on the class of queries that utilize term dependencies and term conjunction operations. In each case, we empirically show that these optimizations can significantly improve query processing efficiency without negatively impacting retrieval effectiveness. Additionally, we implement these optimizations in the context of a new retrieval system known as Julien. As opposed to implementing these techniques as one-off solutions hard-wired to specific retrieval models, we treat each technique as a "behavioral" extension to the original system. This allows us to flexibly stack the modifications to use the optimizations in conjunction, increasing efficiency even further.
By focusing on the behaviors of the objects involved in the retrieval process instead of on the details of the retrieval algorithm itself, we can recast these techniques to be applied only when the conditions are appropriate. Finally, the modular design of these components illustrates a system design that allows improvements to be implemented without disturbing the existing retrieval infrastructure.
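An interpolated subquery scores a document as a weighted sum of subquery scores, and dynamic pruning can skip a document as soon as its best achievable score cannot beat the current entry threshold. The sketch below shows that interaction in a MaxScore-style form; the function name, the per-subquery upper bounds, and the example weights are illustrative assumptions, not Julien's actual API:

```python
def score_with_pruning(doc_scores, weights, max_scores, threshold):
    """doc_scores: per-subquery scores for one document;
    max_scores: per-subquery upper bounds across the collection.
    Returns the interpolated score, or None if the document is pruned."""
    upper = sum(w * m for w, m in zip(weights, max_scores))
    total = 0.0
    for w, s, m in zip(weights, doc_scores, max_scores):
        upper -= w * m          # remaining optimistic mass shrinks...
        total += w * s          # ...as the real partial score accumulates
        if total + upper < threshold:
            return None         # prune: this document cannot enter the top-k
    return total

# weights interpolate two subqueries, e.g. unigram and bigram evidence
print(score_with_pruning([0.4, 0.9], weights=[0.7, 0.3],
                         max_scores=[1.0, 1.0], threshold=0.2))
```

The tighter the per-subquery bounds, the earlier the early-exit test fires — which is why a scoring formulation that exposes good bounds makes a model "more responsive" to pruning.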
54

Phenotyping cotton compactness using machine learning and UAS multispectral imagery

Waldbieser, Joshua Carl 08 December 2023 (has links) (PDF)
Breeding compact cotton plants is desirable for many reasons, but current research is restricted by manual data collection. Unmanned aircraft system (UAS) imagery shows potential for high-throughput automation of this process. Using multispectral orthomosaics and ground truth measurements, I developed supervised models with a wide range of hyperparameters to predict three compactness traits. Extreme gradient boosting using a feature matrix as input was able to predict the height-related metric with R² = 0.829 and RMSE = 0.331. The breadth metrics require more detailed data and more complex models to predict accurately.
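The two metrics the abstract reports — the coefficient of determination and root-mean-square error — are straightforward to compute from predictions. The sketch below uses only the standard library; the plant-height values are hypothetical, not the thesis data:

```python
import math

def r2_rmse(y_true, y_pred):
    """Coefficient of determination (R²) and root-mean-square error."""
    n = len(y_true)
    mean = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot, math.sqrt(ss_res / n)

# hypothetical plant-height ground truth vs. model predictions (arbitrary units)
r2, rmse = r2_rmse([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
print(round(r2, 3), round(rmse, 3))
```

R² near 1 means the model explains most of the variance in the trait; RMSE is in the trait's own units, which is why the abstract can report both on the same predictions.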
55

Wildfire Risk Assessment Using Convolutional Neural Networks and MODIS Climate Data

Nesbit, Sean F 01 June 2022 (has links) (PDF)
Wildfires burn millions of acres of land each year leading to the destruction of homes and wildland ecosystems while costing governments billions in funding. As climate change intensifies drought volatility across the Western United States, wildfires are likely to become increasingly severe. Wildfire risk assessment and hazard maps are currently employed by fire services, but can often be outdated. This paper introduces an image-based dataset using climate and wildfire data from NASA’s Moderate Resolution Imaging Spectroradiometer (MODIS). The dataset consists of 32 climate and topographical layers captured across 0.1 deg by 0.1 deg tiled regions in California and Nevada between 2015 and 2020, associated with whether the region later saw a wildfire incident. We trained a convolutional neural network (CNN) with the generated dataset to predict whether a region will see a wildfire incident given the climate data of that region. Convolutional neural networks are able to find spatial patterns in their multi-dimensional inputs, providing an additional layer of inference when compared to logistic regression (LR) or artificial neural network (ANN) models. To further understand feature importance, we performed an ablation study, concluding that vegetation products, fire history, water content, and evapotranspiration products resulted in increases in model performance, while land information products did not. While the novel convolutional neural network model did not show a large improvement over previous models, it retained the highest holistic measures such as area under the curve and average precision, indicating it is still a strong competitor to existing models. This introduction of the convolutional neural network approach expands the wealth of knowledge for the prediction of wildfire incidents and proves the usefulness of the novel, image-based dataset.
56

An Empirical Evaluation of Neural Process Meta-Learners for Financial Forecasting

Patel, Kevin G 01 June 2023 (has links) (PDF)
Challenges of financial forecasting, such as a dearth of independent samples and non-stationary underlying process, limit the relevance of conventional machine learning towards financial forecasting. Meta-learning approaches alleviate some of these issues by allowing the model to generalize across unrelated or loosely related tasks with few observations per task. The neural process family achieves this by conditioning forecasts based on a supplied context set at test time. Despite promise, meta-learning approaches remain underutilized in finance. To our knowledge, ours is the first application of neural processes to realized volatility (RV) forecasting and financial forecasting in general. We propose a hybrid temporal convolutional network attentive neural process (ANP-TCN) for the purpose of financial forecasting. The ANP-TCN combines a conventional and performant financial time series embedding model (TCN) with an ANP objective. We found ANP-TCN variant models outperformed the base TCN for equity index realized volatility forecasting. In addition, when stack-ensembled with a tree-based model to forecast a trading signal, the ANP-TCN outperformed the baseline buy-and-hold strategy and base TCN model in out-of-sample performance. Across four liquid US equity indices (incl. S&P 500) tested over ∼15 years, the best long-short models (reported by median trajectory) resulted in the following out-of-sample (∼3 years) performance ranges: directional accuracy of 58.65% to 62.26%, compound annual growth rate (CAGR) of 0.2176 to 0.4534, and annualized Sharpe ratio of 2.1564 to 3.3375. All project code can be found at: https://github.com/kpa28-git/thesis-code.
57

PREFERENCES: OPTIMIZATION, IMPORTANCE LEARNING AND STRATEGIC BEHAVIORS

Zhu, Ying 01 January 2016 (has links)
Preferences are fundamental to decision making and play an important role in artificial intelligence. Our research focuses on three groups of problems based on the preference formalism Answer Set Optimization (ASO): preference aggregation problems such as computing optimal (near optimal) solutions, strategic behaviors in preference representation, and learning ranks (weights) for preferences. In the first group of problems, of interest are optimal outcomes, that is, outcomes that are optimal with respect to the preorder defined by the preference rules. In this work, we consider computational problems concerning optimal outcomes. We propose, implement and study methods to compute an optimal outcome; to compute another optimal outcome once the first one is found; to compute an optimal outcome that is similar to (or, dissimilar from) a given candidate outcome; and to compute a set of optimal answer sets each significantly different from the others. For the decision version of several of these problems we establish their computational complexity. For the second topic, strategic behaviors such as manipulation and bribery have received much attention from the social choice community. We study these concepts for preference formalisms that identify a set of optimal outcomes rather than a single winning outcome, the case common to social choice. Such preference formalisms are of interest in the context of combinatorial domains, where preference representations are only approximations to true preferences, and seeking a single optimal outcome runs a risk of missing the one which is optimal with respect to the actual preferences. In this work, we assume that preferences may be ranked (differ in importance), and we use the Pareto principle adjusted to the case of ranked preferences as the preference aggregation rule.
For two important classes of preferences, representing the extreme ends of the spectrum, we provide characterizations of situations when manipulation and bribery are possible, and establish the complexity of the problem to decide that. Finally, we study the problem of learning the importance of individual preferences in preference profiles aggregated by the ranked Pareto rule or positional scoring rules. We provide a polynomial-time algorithm that finds a ranking of preferences such that the ranked profile correctly decides all the examples, whenever such a ranking exists. We also show that the problem of learning a ranking maximizing the number of correctly decided examples is NP-hard. We obtain similar results for the case of weighted profiles.
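The ranked Pareto rule admits a compact sketch under one plausible reading (this formalization and the numeric satisfaction scores are illustrative assumptions, not necessarily the thesis's exact definitions): preferences are grouped by rank, most important first, and one outcome beats another if, at the first rank where the two are not judged identical, it is at least as good on every preference in that rank and strictly better on at least one.

```python
def pareto_better(a, b):
    """Plain Pareto dominance over equal-length satisfaction-score vectors."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def ranked_pareto_better(a_ranks, b_ranks):
    """a_ranks / b_ranks: lists of score vectors, most important rank first."""
    for a, b in zip(a_ranks, b_ranks):
        if a == b:
            continue              # this rank cannot tell the outcomes apart
        return pareto_better(a, b)
    return False                  # identical at every rank

# outcome A ties B on the most important rank but Pareto-dominates on rank 1
print(ranked_pareto_better([[1, 1], [2, 3]], [[1, 1], [2, 2]]))
```

Because lower ranks only matter when higher ranks are tied, a briber who can perturb a single high-rank preference can flip the comparison regardless of the lower ranks — the kind of situation the characterizations in the thesis delimit.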
58

CP-nets: From Theory to Practice

Allen, Thomas E. 01 January 2016 (has links)
Conditional preference networks (CP-nets) exploit the power of ceteris paribus rules to represent preferences over combinatorial decision domains compactly. CP-nets have much appeal. However, their study has not yet advanced sufficiently for their widespread use in real-world applications. Known algorithms for deciding dominance---whether one outcome is better than another with respect to a CP-net---require exponential time. Data for CP-nets are difficult to obtain: human subjects data over combinatorial domains are not readily available, and earlier work on random generation is also problematic. Also, much of the research on CP-nets makes strong, often unrealistic assumptions, such as that decision variables must be binary or that only strict preferences are permitted. In this thesis, I address such limitations to make CP-nets more useful. I show how: to generate CP-nets uniformly randomly; to limit search depth in dominance testing given expectations about sets of CP-nets; and to use local search for learning restricted classes of CP-nets from choice data.
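Dominance testing — the exponential-time problem the abstract refers to — is a search for a chain of single-variable "improving flips" between outcomes, where each flip moves a variable to the value its conditional preference table prefers given the other variables (ceteris paribus). The sketch below runs that search by BFS on a hypothetical two-variable binary CP-net; the net, its preference tables, and the dict encoding are toys, not from the thesis:

```python
from collections import deque

# Toy CP-net over binary variables A and B:
#   A: value 1 is unconditionally preferred
#   B: prefer 1 if A=1, else prefer 0   (preference on B conditioned on A)
def preferred_value(var, outcome):
    if var == "A":
        return 1
    return 1 if outcome["A"] == 1 else 0

def dominates(better, worse):
    """BFS over improving flips starting from `worse`;
    True iff `better` is reachable, i.e. `better` dominates `worse`."""
    seen = set()
    queue = deque([tuple(sorted(worse.items()))])
    while queue:
        state = dict(queue.popleft())
        if state == better:
            return True
        for var in ("A", "B"):
            if state[var] != preferred_value(var, state):
                nxt = dict(state)
                nxt[var] = preferred_value(var, state)   # one improving flip
                key = tuple(sorted(nxt.items()))
                if key not in seen:
                    seen.add(key)
                    queue.append(key)
    return False

print(dominates({"A": 1, "B": 1}, {"A": 0, "B": 0}))
```

The flip sequences can be exponentially long in general, which is why bounding the search depth given expectations about the set of CP-nets — one of the contributions above — matters in practice.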
59

Pedestrian Detection Using Basic Polyline: A Geometric Framework for Pedestrian Detection

Gongbo, Liang 01 April 2016 (has links)
Pedestrian detection has been an active research area in computer vision in recent years. It has many applications that could improve our lives, such as video surveillance security, driving-assistance systems, etc. Approaches to pedestrian detection can be roughly divided into two categories: shape-based approaches and appearance-based approaches. In the literature, most approaches are appearance-based; shape-based approaches are usually integrated with an appearance-based approach to speed up the detection process. In this thesis, I propose a shape-based pedestrian detection framework that uses the geometric features of humans to detect pedestrians. The framework includes three main steps. Given a static image: i) generate the edge image of the given image; ii) extract the basic polylines from the edge image; and iii) use the geometric relationships among the polylines to detect pedestrians. The detection results obtained by the proposed framework are promising. Compared with the algorithm introduced by Dalal and Triggs [7], the proposed framework increased true-positive detections by 47.67% and reduced false-positive detections by 41.42%.
60

Vehicle to Vehicle Communication in Level 4 Autonomy

Hajimirsadeghi, Seyedsalar 01 January 2017 (has links)
With deaths, injuries, and commute times constantly rising due to human driving errors, it is time for a new transportation system, one in which humans are no longer involved in driving decisions and vehicles alone decide their actions. To achieve a fully autonomous world, vehicles must be able to communicate instantly and report their movements in order to reduce accidents. This paper discusses four approaches to vehicle-to-vehicle communication, as well as the underlying standards and technologies that enable vehicles to communicate.
