301 |
Cross-functional interaction during the early phases of user-centered software new product development: reconsidering the common area of interest. Molin-Juustila, T. (Tonja), 25 April 2006.
Abstract
Applying the principles of user-centered development (UCD) in software development practice is not straightforward. In technology-push type software product development it is not clear how to match the new product innovation to the future needs of potential future users. Intensive collaboration between different organizational functions becomes essential. UCD provides valuable tools and practices as learning mechanisms both for users and for the company. The purpose of cross-functional interaction is to iteratively define the best possible market for the emerging new product. This study investigates cross-functional interaction during the early phases of a new software product. The roots of UCD are in traditional software engineering (SE). However, in a software product company it is necessary to take a broader new product development (NPD) perspective.
The results indicate that the early phases of software NPD are actually a collaborative learning process in which representations of the new product are built iteratively, increasing multidisciplinary knowledge related to the evolving shared object of development. The cross-functionally shared object is more than the new software product: it is an emerging new vision for a whole new business area. Both the product and its users-customers-market develop iteratively. Traditionally this is considered to happen through communication within a cross-functional NPD team. Rather than one cross-functional team effort, however, software NPD seems to be a network of cross-functional activities. Furthermore, in software NPD practice the development of the new business unit may actually overlay the more established business organization. This has not been visible enough, and part of the problems with cross-functional interaction may be due to confusion between these two activity systems in everyday practice. Different mediating representations of the multidimensional object knowledge become crucial.
The study starts with a summary of a three-year process improvement effort in one case company, providing the basis for theoretical reflections and analytical generalizations. SE and NPD literature is reviewed to situate the case within current theoretical understanding. The findings are synthesized using concepts from cultural-historical activity theory. This study will hopefully provoke the rethinking of some of the current taken-for-granted issues related to the management of new emerging software product businesses.
302 |
Completion of Ontologies and Ontology Networks. Dragisic, Zlatan, January 2017.
The World Wide Web contains large amounts of data, and in most cases this data has no explicit structure. The lack of structure makes it difficult for automated agents to understand and use such data. A step towards a more structured World Wide Web is the Semantic Web, which aims at introducing semantics to data on the World Wide Web. One of the key technologies in this endeavour is ontologies, which provide a means for modeling a domain of interest and are used for search and integration of data. In recent years many ontologies have been developed. To be able to use multiple ontologies it is necessary to align them, i.e., find inter-ontology relationships. However, developing and aligning ontologies is not an easy task, and it is often the case that ontologies and their alignments are incorrect and incomplete. This can be a problem for semantically-enabled applications: incorrect and incomplete ontologies and alignments directly influence the quality of their results, as wrong results can be returned and correct results can be missed. This thesis focuses on the problem of completing ontologies and ontology networks. The contributions of the thesis are threefold. First, we address the issue of completing the is-a structure and alignment in ontologies and ontology networks. We have formalized the problem of completing the is-a structure in ontologies as an abductive reasoning problem and developed algorithms as well as systems for dealing with the problem. With respect to the completion of alignments, we have studied system performance in the Ontology Alignment Evaluation Initiative, a yearly evaluation campaign for ontology alignment systems. We have also addressed the scalability of ontology matching, which is one of the current challenges, by developing an approach for reducing the search space when generating the alignment.
Second, high-quality completion requires user involvement. As users' time and effort are a limited resource, we address the issue of limiting and facilitating user interaction in the completion process. We have conducted a broad study of state-of-the-art ontology alignment systems and identified different issues related to the process. We have also conducted experiments to assess the impact of user errors in the completion process. While the completion of ontologies and ontology networks can be done at any point in their life-cycle, some of the issues can already be addressed in the development phase. The third contribution of the thesis addresses this by introducing ontology completion and ontology alignment into an existing ontology development methodology.
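The abductive formulation of is-a completion can be illustrated with a toy sketch (assumed, heavily simplified semantics; the thesis's systems handle full ontology languages and rank many candidate repairs, and the concept names below are invented examples):

```python
# Illustrative sketch, not the thesis systems: completing the is-a
# structure of a tiny ontology treated as an abduction problem.

def closure(edges):
    """Transitive closure of a set of is-a pairs (child, parent)."""
    closed = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closed):
            for (c, d) in list(closed):
                if b == c and (a, d) not in closed:
                    closed.add((a, d))
                    changed = True
    return closed

def abductive_repairs(is_a, missing):
    """For each wanted-but-underivable subsumption, propose the
    simplest repair: assert the pair itself (one minimal abductive
    explanation; real systems generate and rank many candidates)."""
    derivable = closure(is_a)
    return [pair for pair in missing if pair not in derivable]

is_a = {("golden_retriever", "dog"), ("dog", "mammal")}
missing = [("golden_retriever", "mammal"),   # already derivable
           ("cat", "mammal")]                # needs a repair
print(abductive_repairs(is_a, missing))      # → [('cat', 'mammal')]
```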
303 |
Neural Representation Learning for Semi-Supervised Node Classification and Explainability. Hogun Park (9179561), 28 July 2020.
<div>Many real-world domains are relational, consisting of objects (e.g., users and papers) linked to each other in various ways. Because class labels in graphs are often only available for a subset of the nodes, semi-supervised learning for graphs has been studied extensively to predict the unobserved class labels. For example, we can predict political views in a partially labeled social graph dataset and estimate expected gross incomes of movies in an actor/movie graph with a few labels. Recently, advances in representation learning for graph data have made great strides for semi-supervised node classification. However, most of the methods have mainly focused on learning node representations by considering simple relational properties (e.g., random walk) or aggregating nearby attributes, and it is still challenging to learn complex interaction patterns in partially labeled graphs and provide explanations on the learned representations. </div><div><br></div><div>In this dissertation, multiple methods are proposed to alleviate both challenges for semi-supervised node classification. First, we propose a graph neural network architecture, REGNN, that leverages local inferences for unlabeled nodes. REGNN performs graph convolution to enable label propagation via high-order paths and predicts class labels for unlabeled nodes. In particular, our proposed attention layer of REGNN measures the role equivalence among nodes and effectively reduces the noise generated during the aggregation of observed labels from neighbors at various distances. Second, we also propose a neural network architecture that jointly captures both temporal and static interaction patterns, which we call Temporal-Static-Graph-Net (TSGNet). The architecture learns a latent representation of each node in order to encode complex interaction patterns. Our key insight is that leveraging both a static neighbor encoder, which learns aggregate neighbor patterns, and a graph neural network-based recurrent unit, which captures complex interaction patterns, improves the performance of node classification. Lastly, in spite of the better performance of representation learning on node classification tasks, neural network-based representation learning models are still less interpretable than previous relational learning models due to the lack of explanation methods. To address the problem, we show that nodes with high bridgeness scores have larger impacts on node embeddings such as DeepWalk, LINE, Struc2Vec, and PTE under perturbation. However, it is computationally heavy to compute bridgeness scores, so we propose a novel gradient-based explanation method, GRAPH-wGD, to find nodes with high bridgeness efficiently. In our evaluations, our proposed architectures (REGNN and TSGNet) for semi-supervised node classification consistently improve predictive performance on real-world datasets. Our GRAPH-wGD also identifies important nodes as global explanations, which significantly change both predicted probabilities on node classification tasks and k-nearest neighbors in the embedding space after perturbing the highly ranked nodes and re-learning low-dimensional node representations for the DeepWalk and LINE embedding methods.</div>
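As a point of reference for the label-propagation idea that REGNN builds beyond, here is a minimal classical propagation sketch in plain Python (it has none of REGNN's role-equivalence attention or high-order paths; the graph and labels are toy examples):

```python
# Hedged sketch of plain label propagation, the classical baseline for
# semi-supervised node classification on a partially labeled graph.

def label_propagation(adj, labels, n_iter=20):
    """adj: {node: [neighbors]}; labels: {node: class} for the labeled
    subset. Unlabeled nodes repeatedly take the majority class among
    their neighbors' current guesses."""
    guess = dict(labels)
    for node in adj:
        guess.setdefault(node, None)
    for _ in range(n_iter):
        for node in adj:
            if node in labels:        # observed labels stay fixed
                continue
            votes = [guess[n] for n in adj[node] if guess[n] is not None]
            if votes:
                guess[node] = max(set(votes), key=votes.count)
    return guess

adj = {0: [1], 1: [0, 2], 2: [1]}     # a 3-node chain
print(label_propagation(adj, {0: "A"}))   # → {0: 'A', 1: 'A', 2: 'A'}
```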
304 |
TOWARDS TIME-AWARE COLLABORATIVE FILTERING RECOMMENDATION SYSTEM. Dawei Wang (9216029), 12 October 2021.
<div><div><div><p>As technological capacity to store and exchange information progresses, the amount of available data grows explosively, which can lead to information overload: the difficulty of making decisions effectively increases when one has too much information about an issue. Recommendation systems are a subclass of information filtering systems that aim to predict a user’s opinion of or preference for a topic or item, thereby providing personalized recommendations to users by exploiting historic data. They are widely used in e-commerce sites such as Amazon.com, online movie streaming companies such as Netflix, and social media networks such as Facebook. Memory-based collaborative filtering (CF) is one of the recommendation system methods used to predict a user’s rating or preference by exploring historic ratings, but without incorporating any content information about users or items. Many studies have been conducted on memory-based CFs to improve prediction accuracy, but none of them have achieved better prediction accuracy than state-of-the-art model-based CFs. Furthermore, a product or service is not judged only by its own characteristics but also by the characteristics of other products or services offered concurrently, and it can be judged by anchoring based on users’ memories. Rating or satisfaction is viewed as a function of the discrepancy or contrast between expected and obtained outcomes, documented as contrast effects. Thus, a rating given to an item by a user is a comparative opinion based on the user’s past experiences, and the score of a rating can be affected by the sequence and time of ratings. However, in traditional CFs, pairwise similarities measured between items do not consider time factors such as the sequence of ratings, which can introduce biases caused by contrast effects. In this research, we propose a new approach that combines both structural and rating-based similarity measurement in memory-based CFs. We found that memory-based CF using the combined similarity measurement can achieve better prediction accuracy than model-based CFs in terms of lower MAE, and can reduce memory and time by using fewer neighbors than traditional memory-based CFs on the MovieLens and Netflix datasets. We also propose techniques to reduce the biases caused by users’ comparing, anchoring, and adjustment behaviors by introducing time-aware similarity measurements into memory-based CFs. Lastly, we introduce novel techniques to identify, quantify, and visualize user preference dynamics and show how they can be used to generate dynamic recommendation lists that fit each user’s current preferences.</p></div></div></div>
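One assumed form such a time-aware similarity could take is a co-rating cosine with an exponential decay on the time gap between the two ratings (a sketch of the general idea, not the thesis's exact measurement; the half-life is a made-up parameter):

```python
import math

# Illustrative sketch: an item-item similarity for memory-based CF in
# which each co-rating is down-weighted by the time gap between the
# two ratings, limiting the contrast/anchoring bias described above.

def time_aware_similarity(ratings_i, ratings_j, half_life=30.0):
    """ratings_*: {user: (rating, day)}. Cosine over co-rating users,
    each pair weighted by exp(-|t_i - t_j| * ln 2 / half_life)."""
    common = ratings_i.keys() & ratings_j.keys()
    if not common:
        return 0.0
    num = den_i = den_j = 0.0
    for u in common:
        r_i, t_i = ratings_i[u]
        r_j, t_j = ratings_j[u]
        w = math.exp(-abs(t_i - t_j) * math.log(2) / half_life)
        num += w * r_i * r_j
        den_i += w * r_i * r_i
        den_j += w * r_j * r_j
    return num / math.sqrt(den_i * den_j)

item_i = {"u1": (4, 0), "u2": (2, 0)}
item_j = {"u1": (4, 0), "u2": (5, 60)}   # u2's ratings are 60 days apart
print(round(time_aware_similarity(item_i, item_j), 4))   # → 0.9512
```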
305 |
Multivariate Information Measures. Xueyan Niu (11850761), 18 December 2021.
<div>Many important scientific, engineering, and societal challenges involve large systems of individual agents or components interacting in complex ways. For example, to understand the emergence of consciousness, we study the dendritic integration in neurons; to prevent disease and rumor outbreaks, we trace the dynamics of social networks; to perform complicated scientific experiments, we separate and control the independent variables. Collectively, the interactions between individual neurons/agents/variables are often non-linear, i.e., a subset of the agents jointly behave in a manner unlike the marginal behaviors of the individuals.</div><div><br></div><div>The goal of this thesis is to construct a theoretical framework for measuring, comparing, and representing complex interactions in stochastic systems. Specifically, tools from information theory, differential geometry, lattice theory, and linear algebra are used to identify and characterize higher-order interactions among random variables.</div><div><br></div><div>We first propose measures of unique, redundant, and synergistic interactions for small stochastic systems using information projections for the exponential family. Their magnitudes are endowed with information theoretical meanings naturally, since they are measured by the Kullback-Leibler divergence. We prove that these quantities satisfy various desired properties.</div><div><br></div><div>We next apply these measures to hypothesis testing and network communication. We interpret the unique information as the two types of error components in a hypothesis testing problem. We analytically show that there is a duality between the synergistic and redundant information in Gaussian Multiple Access Channels (MAC) and Broadcast Channels (BC). 
We establish a novel duality between the partial information decomposition components for MAC and BC in the general case.</div><div><br></div><div>We lastly propose a new concept of representing the partial information decomposition framework with random variables. We give necessary and sufficient conditions for the representation under the assumption of Gaussianity and develop a construction method.</div><div><br></div><div>This research has the potential to advance the fields of information theory, statistics, and machine learning by contributing novel ideas, implementing these ideas with innovative tools, and constructing new simulation methods.</div>
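As a small taste of the Kullback-Leibler machinery invoked above, the sketch below computes mutual information as D(p(x,y) || p(x)p(y)) for a discrete joint distribution; the thesis's unique, redundant, and synergistic measures use information projections of the same divergence, which this toy code does not attempt:

```python
import math

# Minimal sketch: mutual information of a two-variable system as a
# KL divergence from the joint to the product of its marginals.

def kl(p, q):
    """D(p || q) in bits for aligned probability lists."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def mutual_information(joint):
    """joint: {(x, y): prob}. I(X;Y) = D(p(x,y) || p(x)p(y))."""
    px, py = {}, {}
    for (x, y), pr in joint.items():
        px[x] = px.get(x, 0.0) + pr
        py[y] = py.get(y, 0.0) + pr
    keys = list(joint)
    p = [joint[k] for k in keys]
    q = [px[x] * py[y] for (x, y) in keys]
    return kl(p, q)

# Perfectly correlated bits share exactly 1 bit of information.
corr = {(0, 0): 0.5, (1, 1): 0.5}
print(mutual_information(corr))   # → 1.0
```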
306 |
DECEPTIVE REVIEW IDENTIFICATION VIA REVIEWER NETWORK REPRESENTATION LEARNING. Shih-Feng Yang (11502553), 19 December 2021.
<div><div>With the growth of the popularity of e-commerce and mobile apps during the past decade, people rely on online reviews more than ever before for purchasing products, booking hotels, and choosing all kinds of services. Users share their opinions by posting product reviews on merchant sites or online review websites (e.g., Yelp, Amazon, TripAdvisor). Although online reviews are valuable information for people who are interested in products and services, many reviews are manipulated by spammers to provide untruthful information for business competition. Since deceptive reviews can damage the reputation of brands and mislead customers’ buying behaviors, the identification of fake reviews has become an important topic for online merchants. Among the computational approaches proposed for fake review identification, network-based fake review analysis jointly considers the information from review text, reviewer behaviors, and production information. Researchers have proposed network-based methods (e.g., metapath) on heterogeneous networks, which have shown promising results.</div><div><br></div><div>However, we’ve identified two research gaps in this study: 1) We argue the previous network-based reviewer representations are not sufficient to preserve the relationship of reviewers in networks. To be specific, previous studies only considered first-order proximity, which indicates the observable connection between reviewers, but not second-order proximity, which captures the neighborhood structures where two vertices overlap. Moreover, although previous network-based fake review studies (e.g., metapath) connect reviewers through feature nodes across heterogeneous networks, they ignored the multi-view nature of reviewers. A view is derived from a single type of proximity or relationship between the nodes, which can be characterized by a set of edges. In other words, the reviewers could form different networks with regard to different relationships. 
2) The text embeddings of reviews in previous network-based fake review studies were not considered jointly with reviewer embeddings.</div><div><br></div><div>To tackle the first gap, we generated reviewer embeddings via MVE (Qu et al., 2017), a framework for multi-view network representation learning, and conducted spammer classification experiments to examine the effectiveness of the learned embeddings for distinguishing spammers and non-spammers. In addition, we performed unsupervised hierarchical clustering to observe the clusters of the reviewer embeddings. Our results show the clusters generated based on reviewer embeddings capture the difference between spammers and non-spammers better than those generated based on reviewers’ features.</div><div><br></div><div>To fill the second gap, we proposed hybrid embeddings that combine review text embeddings with reviewer embeddings (i.e., the vector that represents a reviewer’s characteristics, such as writing or behavioral patterns). We conducted fake review classification experiments to compare the performance between using hybrid embeddings (i.e., text+reviewer) as features and using text-only embeddings as features. Our results suggest that hybrid embedding is more effective than text-only embedding for fake review identification. Moreover, we compared the prediction performance of the hybrid embeddings with baselines and showed our approach outperformed others on fake review identification experiments.</div><div><br></div><div>The contributions of this study are four-fold: 1) We adopted a multi-view representation learning approach for reviewer embedding learning and analyzed the efficacy of the embeddings used for spammer classification and fake review classification. 2) We proposed a hybrid embedding that considers the characteristics of both review text and the reviewer. Our results are promising and suggest hybrid embedding is very effective for fake review identification. 
3) We proposed a heuristic network construction approach that builds a user network based on user features. 4) We evaluated how different spammer thresholds impact the performance of fake review classification. Several studies have used the same datasets as we used in this study, but most of them followed the spammer definition mentioned by Jindal and Liu (2008). We argued that the spammer definition should be configurable based on different datasets. Our findings showed that by carefully choosing the spammer thresholds for the target datasets, hybrid embeddings have higher efficacy for fake review classification.</div></div>
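The first-order versus second-order proximity distinction in gap 1 can be sketched in a few lines (a toy reviewer network; Jaccard overlap stands in here for the second-order notion, and all names are invented):

```python
# Hedged sketch of first- vs second-order proximity between reviewers.
# First-order: a direct link. Second-order: overlapping neighborhoods,
# which can be high even when two reviewers never connect directly.

def first_order(adj, u, v):
    """1.0 if u and v are directly connected, else 0.0."""
    return 1.0 if v in adj.get(u, set()) else 0.0

def second_order(adj, u, v):
    """Jaccard overlap of the two reviewers' neighborhoods."""
    nu, nv = adj.get(u, set()), adj.get(v, set())
    if not nu or not nv:
        return 0.0
    return len(nu & nv) / len(nu | nv)

# Two spammer-like reviewers targeting the same products p1, p2.
adj = {"r1": {"p1", "p2"}, "r2": {"p1", "p2"}, "r3": {"p3"}}
print(first_order(adj, "r1", "r2"))    # → 0.0 (never directly linked)
print(second_order(adj, "r1", "r2"))   # → 1.0 (identical neighborhoods)
```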
307 |
Feature Fusion Deep Learning Method for Video and Audio Based Emotion Recognition. Yanan Song (11825003), 20 December 2021.
In this thesis, we propose a deep learning based emotion recognition system in order to improve the classification success rate. We first use transfer learning to extract visual features and Mel-frequency cepstral coefficients (MFCC) to extract audio features, and then apply recurrent neural networks (RNN) with an attention mechanism to process the sequential inputs. After that, the outputs of both channels are fused in a concatenation layer, which is processed using batch normalization to reduce internal covariate shift. Finally, the classification result is obtained by the softmax layer. In our experiments, the video and audio subsystems achieve 78% and 77% accuracy respectively, and the feature fusion system with video and audio achieves 92% accuracy on the RAVDESS dataset for eight emotion classes. Our proposed feature fusion system outperforms conventional methods in classification performance.
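The fusion head described above (concatenation, batch normalization, softmax) can be sketched as follows; the feature vectors are placeholders for the channel outputs, and normalizing a single example's activations, rather than a batch, is a simplification purely for illustration:

```python
import math

# Toy sketch of the fusion head: concatenate the two channel outputs,
# normalize, and apply softmax. Values are made-up placeholders, not
# the trained model's activations.

def normalize(x, eps=1e-5):
    """Zero-mean, unit-variance scaling of one activation vector
    (real batch norm normalizes across a batch, not one example)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

def softmax(x):
    m = max(x)                       # subtract max for stability
    exps = [math.exp(v - m) for v in x]
    s = sum(exps)
    return [e / s for e in exps]

video_features = [0.8, 0.1]          # stand-in for the video channel
audio_features = [0.5, 0.3]          # stand-in for the audio channel
fused = normalize(video_features + audio_features)
probs = softmax(fused)               # one probability per class
print(round(sum(probs), 6))          # → 1.0
```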
308 |
ADVANCED INDOOR THERMAL ENVIRONMENT CONTROL USING OCCUPANT’S MEAN FACIAL SKIN TEMPERATURE AND CLOTHING LEVEL. Xuan Li (8731800), 20 April 2020.
<div>
<p>People spend most of their time indoors. Because people’s health and productivity depend strongly on the quality of the indoor thermal environment, it is important to provide occupants with a healthy, comfortable, and productive indoor thermal environment. However, inappropriate thermostat temperature setpoints not only waste a large amount of energy but also make occupants less comfortable. This study intended to develop a new control strategy for HVAC systems that adjusts the thermostat setpoint automatically in order to provide a more comfortable and satisfactory thermal environment.</p>
<p>This study first trained a CNN-based image classification model to classify occupants’ amount of clothing insulation (clothing level). Because clothing level is related to human thermal comfort, having this information is helpful when determining the temperature setpoint. Using this method, the study performed experiments to collect comfortable air temperatures for different clothing levels, gathering 450 data points from college students. From these data points, the study developed an empirical curve for calculating the comfortable air temperature for a specific clothing level. The results obtained with this curve could provide environments with low average dissatisfaction and an average thermal sensation close to neutral.</p>
<p>To adjust the setpoint temperature according to occupants’ thermal comfort, this study used mean facial skin temperature as an indicator of thermal comfort, because when people feel hot their body temperature rises, and vice versa. To determine the correlation, we used a long-wave infrared (LWIR) camera to non-invasively obtain the occupant’s facial thermal map. By processing the thermal map with a Haar-cascade face detection program, the occupant’s mean facial skin temperature was calculated. Using this method, the study performed experiments to collect occupants’ mean facial skin temperatures under different thermal environments, gathering 225 data points from college students. From these data points, the study identified different intervals of mean facial skin temperature under different thermal environments.</p>
<p>Lastly, this study used the data collected in the previous two investigations to develop a control platform and control logic for a single-occupant office. The clothing level measured by image classification was used to determine the temperature setpoint. According to the measured mean facial skin temperature, the setpoint could then be adjusted automatically to make the occupant more comfortable. This study performed 22 test sessions to validate the new control strategy. The results showed that 91% of the tested subjects felt neutral in the office.</p>
</div>
<br>
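A hypothetical sketch of this two-signal control logic might look as follows; every setpoint, threshold, and step value is an assumed placeholder, not a value measured in the thesis:

```python
# Hypothetical control sketch: pick a base setpoint from the detected
# clothing level, then nudge it using mean facial skin temperature.
# All numbers below are invented placeholders for illustration.

# Assumed comfort curve: heavier clothing -> cooler setpoint (deg C).
CLOTHING_SETPOINT = {"light": 25.0, "medium": 23.0, "heavy": 21.0}

def next_setpoint(clothing_level, facial_temp_c,
                  warm_threshold=34.5, cool_threshold=33.0, step=0.5):
    """Return an adjusted thermostat setpoint in deg C."""
    setpoint = CLOTHING_SETPOINT[clothing_level]
    if facial_temp_c > warm_threshold:      # occupant trending warm
        setpoint -= step
    elif facial_temp_c < cool_threshold:    # occupant trending cool
        setpoint += step
    return setpoint

print(next_setpoint("medium", 35.0))   # → 22.5 (warm face: cool down)
print(next_setpoint("light", 33.5))    # → 25.0 (comfortable: hold)
```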
309 |
Explainable AI in Workflow Development and Verification Using Pi-Calculus. January 2020.
abstract: Computer science education is an increasingly vital area of study with various challenges that increase the difficulty level for new students, resulting in higher attrition rates. As part of an effort to resolve this issue, a new visual programming language environment was developed for this research: the Visual IoT and Robotics Programming Language Environment (VIPLE). VIPLE is based on computational thinking and flowcharts, which reduces the need to memorize the detailed syntax of text-based programming languages. VIPLE has been used at Arizona State University (ASU) in multiple years and sections of FSE100, as well as in universities worldwide. Another major issue with teaching large programming classes is the potential lack of qualified teaching assistants to grade and offer insight into a student’s programs at a level beyond output analysis.
In this dissertation, I propose a novel framework for performing semantic autograding, which analyzes student programs at a semantic level to give students additional, systematic help. A general autograder is not practical for general-purpose programming languages, due to the flexibility of their semantics. A practical autograder is possible in VIPLE because of its simplified syntax and restricted semantic options. The design of this autograder is based on the concept of theorem provers. To achieve this goal, I employ a modified version of Pi-Calculus to represent VIPLE programs and Hoare Logic to formalize program requirements. By building on the inference rules of Pi-Calculus and Hoare Logic, I am able to construct a theorem prover that can perform automated semantic analysis. Furthermore, building on this theorem prover enables me to develop a self-learning algorithm that can learn the conditions for a program’s correctness according to a given solution program. / Dissertation/Thesis / Doctoral Dissertation Computer Science 2020
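The Hoare-triple view of program requirements can be illustrated with a brute-force checker over a tiny state space (the dissertation reasons symbolically over Pi-Calculus representations of VIPLE programs; this sketch only conveys the pre/postcondition idea, and the example triple is invented):

```python
# Simplified illustration: check a Hoare triple {P} program {Q} for
# straight-line integer programs by evaluating the predicates over a
# small test domain instead of proving them symbolically.

def holds(pre, program, post, domain=range(-10, 11)):
    """pre, post: predicates on a state dict; program: list of
    (var, fn) assignments applied in order. Returns True if every
    state in the domain satisfying pre ends satisfying post."""
    for x in domain:
        state = {"x": x}
        if not pre(state):
            continue                      # precondition filters states
        for var, fn in program:
            state[var] = fn(state)        # execute the assignment
        if not post(state):
            return False
    return True

# {x >= 0}  y := x + 1  {y > 0}
triple_ok = holds(lambda s: s["x"] >= 0,
                  [("y", lambda s: s["x"] + 1)],
                  lambda s: s["y"] > 0)
print(triple_ok)   # → True
```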
310 |
A learning framework for zero-knowledge game playing agents. Duminy, Willem Harklaas, 17 October 2007.
The subjects of perfect information games, machine learning, and computational intelligence combine in an experiment that investigates a method to build the skill of a game-playing agent from zero game knowledge. The skill of a playing agent is determined by two aspects: the first is the quantity and quality of the knowledge it uses, and the second is its search capacity. This thesis introduces a novel representation language that combines symbols and numeric elements to capture game knowledge. Insofar as search is concerned, an extension to an existing knowledge-based search method is developed. Empirical tests show an improvement over alpha-beta, especially in learning conditions where the knowledge may be weak. Current machine learning techniques as applied to game agents are reviewed, and from these techniques a learning framework is established. The data-mining algorithm ID3 and the computational intelligence technique Particle Swarm Optimisation (PSO) form the key learning components of this framework. The classification trees produced by ID3 are subjected to new post-pruning processes specifically defined for the mentioned representation language. Different combinations of these pruning processes are tested and a dominant combination is chosen for use in the learning framework. As an extension to PSO, tournaments are introduced as a relative fitness function. A variety of alternative tournament methods are described and some experiments are conducted to evaluate these. The final design decisions are incorporated into the learning framework configuration, and learning experiments are conducted on Checkers and some variations of Checkers. These experiments show that learning has occurred, but they also highlight the need for further development and experimentation. Some ideas in this regard conclude the thesis. / Dissertation (MSc)--University of Pretoria, 2007. / Computer Science / MSc / Unrestricted
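The tournament-as-relative-fitness idea can be sketched in a few lines (the `play` stub and particle fields below are invented stand-ins for an actual Checkers match between PSO particles):

```python
# Sketch of tournaments as a relative fitness function: instead of an
# absolute score, each candidate's fitness is its win count against
# the other particles in the swarm. `play` is a stub standing in for
# a real Checkers game between two agents.

def play(a, b):
    """Stub match: higher 'strength' wins (a real game goes here)."""
    return a if a["strength"] >= b["strength"] else b

def round_robin_fitness(particles):
    """Each particle plays every other once; fitness = wins."""
    wins = {p["name"]: 0 for p in particles}
    for i, a in enumerate(particles):
        for b in particles[i + 1:]:
            wins[play(a, b)["name"]] += 1
    return wins

swarm = [{"name": "p1", "strength": 0.9},
         {"name": "p2", "strength": 0.4},
         {"name": "p3", "strength": 0.7}]
print(round_robin_fitness(swarm))   # → {'p1': 2, 'p2': 0, 'p3': 1}
```

A round-robin is only one of the tournament schemes the thesis compares; knockout or randomly sampled pairings trade accuracy of the relative ranking against the number of games played.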