  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
621

Contribution à l’analyse des facteurs explicatifs de la performance des commerciaux en matière de veille marketing : esquisse d'un cadre conceptuel / Contribution to the analysis of the explanatory factors of salespeople's performance in terms of marketing intelligence: outline of a conceptual framework

Majd, Thomas 09 September 2015 (has links)
Face à la concurrence de plus en plus acharnée, aux clients de mieux en mieux informés et aux produits qui se banalisent de jour en jour, il est de plus en plus difficile pour les entreprises de vendre leurs produits et services et de conserver leurs avantages concurrentiels durablement. D’où la nécessité pour elles de disposer d’une bonne perception des évolutions, des mouvements et des pratiques des principaux acteurs de leur environnement. Des outils tels l’intelligence économique et la veille marketing permettent de relever les défis susmentionnés. Parmi les acteurs susceptibles d’assumer un rôle important dans ce domaine figure la force de vente, interface entre le marché et l’entreprise. Bien que le commercial soit de plus en plus considéré comme un véritable vecteur d’information de terrain, l’étude des facteurs qui déterminent sa performance dans ce domaine est encore peu développée. Dans ce contexte, l’objet de cette thèse est de proposer un modèle analysant les principaux facteurs susceptibles d’influencer la performance des commerciaux en matière de veille marketing. L’hypothèse à la base de ce modèle s’appuie sur l’existence de deux grandes catégories de facteurs susceptibles de favoriser la performance des commerciaux en matière de transmission régulière des informations de terrain (veille marketing) : d’une part, des facteurs spécifiques aux commerciaux et, d’autre part, des facteurs liés au management de la force de vente. / In the context of ever stiffer competition, ever better informed customers, and products becoming ever more indistinguishable, it is increasingly difficult for companies to sell their products and services and maintain a sustainable competitive advantage. Hence the need for them to have a proper perception of the evolutions, movements and practices of the main actors in their environment. Tools such as economic intelligence and market intelligence make it possible to respond to the aforementioned challenges. Among the actors likely to take on an important role in this field is the sales force, as an interface between the market and the company. Although salespeople are increasingly considered as veritable vectors of information in the field, the study of the factors that determine their performance in that respect has received little attention until now. In this context, the objective of this thesis is to propose a model analyzing the main factors likely to influence the performance of salespeople in terms of market intelligence. The basic hypothesis of this model relies on the existence of two main categories of factors likely to favor the performance of salespeople in terms of passing on field information on a regular basis (market intelligence): first, factors which are specific to salespeople, and second, factors linked to the management of the sales force.
622

Cross-Lingual Word Sense Disambiguation for Low-Resource Hybrid Machine Translation

Rudnick, Alexander James 08 January 2019 (has links)
This thesis argues that cross-lingual word sense disambiguation (CL-WSD) can be used to improve lexical selection for machine translation when translating from a resource-rich language into an under-resourced one, especially when relatively little bitext is available. In CL-WSD, we perform word sense disambiguation, considering the senses of a word to be its possible translations into some target language, rather than using a sense inventory developed manually by lexicographers.

Using explicitly trained classifiers that make use of source-language context and of resources for the source language can help machine translation systems make better decisions when selecting target-language words. This is especially the case when the alternative is hand-written lexical selection rules developed by researchers with linguistic knowledge of the source and target languages, but also true when lexical selection would be performed by a statistical machine translation system, when there is a relatively small amount of available target-language text for training language models.

In this work, I present the Chipa system for CL-WSD and apply it to the task of translating from Spanish to Guarani and Quechua, two indigenous languages of South America. I demonstrate several extensions to the basic Chipa system, including techniques that allow us to benefit from the wealth of available unannotated Spanish text and existing text analysis tools for Spanish, as well as approaches for learning from bitext resources that pair Spanish with languages unrelated to our intended target languages. Finally, I provide proof-of-concept integrations of Chipa with existing machine translation systems, of two completely different architectures.
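To make the CL-WSD idea concrete, the minimal sketch below treats a source word's possible translations as its sense labels and trains a classifier on source-language context. This is an illustrative assumption, not the Chipa system: the toy Spanish sentences are invented, and English glosses stand in for the target-language translations for readability.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Source-language contexts for the ambiguous Spanish word "banco", labeled with
# the translation that would be observed in aligned bitext (invented toy data).
contexts = [
    "abrir una cuenta en el banco",
    "el banco aprobó el préstamo",
    "pedir un crédito al banco",
    "se sentó en un banco del parque",
    "un banco de madera en el jardín",
    "descansar en un banco de la plaza",
]
translations = ["bank", "bank", "bank", "bench", "bench", "bench"]

# A CL-WSD classifier: bag-of-words context features -> target-language word.
clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(contexts, translations)

# At translation time, the classifier suggests a lexical choice from context.
print(clf.predict(["pedir un préstamo al banco"]))        # expected: ['bank']
print(clf.predict(["se sentó en un banco de la plaza"]))  # expected: ['bench']
```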
623

An intelligent real-time lift scheduling system

Hamdi, Muna January 1999 (has links)
In modern high-rise buildings, a suitable control algorithm has to be chosen so that lifts can respond to passenger requests in such a way as to transport them quickly and efficiently to their destinations. The aim of the current work is to assess new scheduling approaches and intelligent monitoring techniques in order to aid the design of new lift systems and to improve the performance of existing installations. To achieve this, the project has been divided into three major parts. Firstly, a model of passenger movements has been developed from an analysis of data gathered from installed lift systems, thereby allowing the realistic simulation of landing calls, car calls and door opening times. Secondly, a lift simulator has been produced to allow the modular comparison of alternative scheduling and monitoring approaches and to provide an accurate model of lift dynamics. Thirdly, a new intelligent lift scheduling system has been implemented.
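As a point of reference for the kind of scheduling approaches such a simulator compares, the sketch below implements a simple "nearest car" dispatch heuristic for landing calls. The cost model (travel distance plus a penalty per committed stop) is an assumption for illustration and is not the scheduling algorithm developed in the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Car:
    floor: int                                  # current floor
    stops: set = field(default_factory=set)     # committed landing/car calls

def estimated_cost(car: Car, call_floor: int, stop_penalty: float = 2.0) -> float:
    """Rough figure of merit: travel distance plus a penalty per pending stop."""
    return abs(car.floor - call_floor) + stop_penalty * len(car.stops)

def dispatch(cars: list[Car], call_floor: int) -> Car:
    """Assign a new landing call to the car with the lowest estimated cost."""
    best = min(cars, key=lambda car: estimated_cost(car, call_floor))
    best.stops.add(call_floor)
    return best

cars = [Car(floor=0), Car(floor=7), Car(floor=3, stops={5})]
chosen = dispatch(cars, call_floor=6)
print(chosen.floor, chosen.stops)               # the car at floor 7 takes the call
```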
624

How can the combination of intelligence, innovation and interfacing help take a hi-tech SME to market? : a case study

James, Juli January 2014 (has links)
This study interrogates the way in which marketing is carried out, or not carried out, within a high-technology-based small business. The business in question was the basis of a case study undertaken over an 18-month period. The researcher aimed to examine how the combination of entrepreneurial, innovative and relationship marketing can be used to successfully go to market and increase turnover. This PhD project aimed to develop both a theoretical conceptual model and a practice-based application of the model for use in the case study company. The research undertaken combined multiple methods, including autoethnographic participant observation, in-depth responsive interviews and inquiry from within the case study company, as the researcher was fully embedded in the business for 18 months. This combination of methods enabled an immersive and robust case study, in line with the interpretivist approach, allowing the researcher to gain an insight into the social construction of meaning in relation to the marketing practice of the high-technology-based small business and how such meaning can be used to inform ‘contextual’ theory development. The case study was then examined and explained using an autoethnographic narrative and positioned in three phases to show the progression from no formal marketing, to administrative marketing methods, through to innovative and entrepreneurial marketing strategies and techniques. The outcome of the case study research is an adapted practice-based model. This research has informed the development of the 3I’s model and started to show that the weighting of each section may not be equal. The model suggests that market intelligence, marketing innovation and human interfacing, used in the correct combination as suggested by the researcher, can help high-technology-based small businesses, and ultimately businesses in general, go to market successfully.
625

Towards Fast and Efficient Representation Learning

Li, Hao 05 October 2018 (has links)
The success of deep learning and convolutional neural networks in many fields is accompanied by a significant increase in computation cost. With increasing model complexity and the pervasive use of deep neural networks, there is a surge of interest in fast and efficient model training and inference on both cloud and embedded devices. Meanwhile, understanding the reasons for trainability and generalization is fundamental for their further development. This dissertation explores approaches for fast and efficient representation learning together with a better understanding of trainability and generalization. In particular, we ask the following questions and provide our solutions: 1) How can we reduce the computation cost for fast inference? 2) How can we train low-precision models on resource-constrained devices? 3) What does the loss surface look like for neural networks, and how does it affect generalization?

To reduce the computation cost for fast inference, we propose to prune filters from CNNs that are identified as having a small effect on the prediction accuracy. By removing filters with small norms together with their connected feature maps, the computation cost can be reduced accordingly without using special software or hardware. We show that a simple filter pruning approach can reduce the inference cost while regaining close to the original accuracy by retraining the networks.

To further reduce the inference cost, quantizing model parameters with low-precision representations has shown significant speedups, especially for edge devices that have limited computing resources, memory capacity, and power budgets. To enable on-device learning on low-power systems, removing the dependency on a full-precision model during training is the key challenge. We study various quantized training methods with the goal of understanding the differences in behavior, and the reasons for success or failure. We address the question of why algorithms that maintain floating-point representations work so well, while fully quantized training methods stall before training is complete. We show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.

Finally, we explore the structure of neural loss functions, and the effect of loss landscapes on generalization, using a range of visualization methods. We introduce a simple filter normalization method that helps us visualize loss function curvature and make meaningful side-by-side comparisons between loss functions. The sharpness of minimizers correlates well with generalization error when this visualization is used. Then, using a variety of visualizations, we explore how training hyper-parameters affect the shape of minimizers, and how network architecture affects the loss landscape.
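The norm-based filter pruning idea can be illustrated in a few lines: rank a convolutional layer's filters by L1 norm, keep the strongest ones, and drop the matching input channels of the following layer. The shapes and keep ratio below are illustrative assumptions; in practice the pruned network is retrained to recover accuracy.

```python
import numpy as np

rng = np.random.default_rng(0)
conv1 = rng.normal(size=(64, 3, 3, 3))     # (out_channels, in_channels, kH, kW)
conv2 = rng.normal(size=(128, 64, 3, 3))   # next layer consumes conv1's outputs

def prune_filters(w, w_next, keep_ratio=0.5):
    """Drop the filters of `w` with the smallest L1 norms, and remove the
    corresponding input channels from the following layer `w_next`."""
    norms = np.abs(w).sum(axis=(1, 2, 3))          # one L1 norm per filter
    n_keep = int(len(norms) * keep_ratio)
    keep = np.sort(np.argsort(norms)[-n_keep:])    # indices of filters to keep
    return w[keep], w_next[:, keep]

p1, p2 = prune_filters(conv1, conv2)
print(p1.shape, p2.shape)   # (32, 3, 3, 3) (128, 32, 3, 3)
```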
626

Business Intelligence na portálu Studentreality / Business Intelligence on the Studentreality portal

Škorpík, Jakub January 2015 (has links)
This diploma thesis deals with the creation of a module for the company Student-living, s.r.o., designed to simplify the analysis and interpretation of the data contained in the database of the real estate portal Studentreality. The module will be used by the company's management and staff, as well as by property owners who advertise on the portal. The development is preceded by a familiarization with the current system, the database and the technologies to be used for the implementation; this is followed by the main part, which describes the design and implementation of the module itself.
627

The virtual participant : story telling in a computer supported collaborative learning environment

Masterton, Simon J. January 1999 (has links)
This thesis presents a study of a novel approach for supporting students in text-based electronic conferencing. It describes the development of a concept known as the Virtual Participant. An initial prototype was developed which was tested on the Open University Business School MBA course on Creative Management. The Virtual Participant first presented itself to the users as Uncle Bulgaria, a metaphor for collecting and recycling important information. The Virtual Participant approach is to store the discussions students have had in previous years that the course has run, and to retrieve those discussions at a time most appropriate to helping the students studying this year. It was never intended to provide 'the answer' but rather examples of similar discussions on similar topics. Uncle Bulgaria interacted with the students over a period of 16 weeks, during which time the students prepared two assignments and completed the first half of the course. The information gained from the students' interactions with the system and their feedback to a questionnaire survey was then fed back into a second prototype, which was again tested on the same course. In the second study the system was known to the students as the Active Archive, an active component of an archive of past student discussions. Through cross-year comparisons it was possible to evaluate the improvements made between the Active Archive and Uncle Bulgaria systems. The Active Archive interacted with the students on a much larger scale than Uncle Bulgaria had, but with no increased negative impact. The second study provided examples where the Active Archive stimulated discussion amongst the students and vicarious learning could be said to have taken place. Taking the lessons learned from these two studies, a number of guidelines for the development of such systems have been produced and are described and discussed.
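A minimal sketch of the store-and-recycle idea might index archived discussion messages and surface the most similar past contribution when a new message appears. The TF-IDF/cosine retrieval and the archive snippets below are assumptions for illustration, not a description of how the actual Virtual Participant selected material.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical archive of past-year conference messages.
archive = [
    "How did previous students structure the first creative management assignment?",
    "Discussion on brainstorming techniques for the group exercise.",
    "Tips for balancing MBA coursework with a full-time job.",
]

vectorizer = TfidfVectorizer(stop_words="english")
archive_vecs = vectorizer.fit_transform(archive)

def recycle(new_message: str) -> str:
    """Return the archived contribution most similar to the new message."""
    sims = cosine_similarity(vectorizer.transform([new_message]), archive_vecs)
    return archive[sims.argmax()]

print(recycle("Any advice on approaching the first assignment?"))
```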
628

Deep Networks for Forward Prediction and Planning

Henaff, Mikael 17 November 2018 (has links)
Learning to predict how an environment will evolve and the consequences of one's actions is an important ability for autonomous agents, and can enable planning with relatively few interactions with the environment, which may be slow or costly. However, learning an accurate forward model is often difficult in practice due to several features often present in complex environments. First, many environments exhibit long-term dependencies which require the system to learn to record and maintain relevant information in its memory over long timescales. Second, the environment may only be partially observed, and the aspects of the environment which are observed may depend on parts of the environment which are hidden. Third, many observed processes contain some form of apparent or inherent stochasticity, which makes the task of predicting future states ill-defined.

In this thesis, we propose approaches to tackle and better understand these different challenges associated with learning predictive models of the environment and using them for planning. We first provide an analysis of recurrent neural network (RNN) memory, which sheds light on the mechanisms by which RNNs are able to store different types of information in their memory over long timescales, through the analysis of two synthetic benchmark tasks. We then introduce a new neural network architecture which keeps an estimate of the state of the environment in its memory, and can deal with partial observability by reasoning based on what is observed. We next present a new method for performing planning using a learned model of the environment with both discrete and continuous actions. Finally, we propose an approach for model-based planning in the presence of both environment uncertainty and model uncertainty, and evaluate it on a new real-world dataset and environment with applications to autonomous driving.
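One simple way to use a learned forward model for planning, sketched below, is "random shooting": sample candidate action sequences, roll each one out through the model, score the predicted trajectories, and execute the first action of the best sequence. The linear dynamics and quadratic cost here are stand-ins, not the models or planners developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(state, action):
    """Stand-in for a learned predictor of the next state."""
    return state + 0.1 * action                # a point nudged by its action

def cost(state, goal):
    return float(np.sum((state - goal) ** 2))  # squared distance to the goal

def plan(state, goal, horizon=10, n_candidates=256):
    """Random-shooting MPC: score sampled action sequences under the model."""
    candidates = rng.uniform(-1.0, 1.0, size=(n_candidates, horizon, state.size))
    best_seq, best_cost = None, np.inf
    for seq in candidates:
        s, total = state.copy(), 0.0
        for a in seq:                          # roll the model forward
            s = forward_model(s, a)
            total += cost(s, goal)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]                         # execute only the first action

state, goal = np.zeros(2), np.array([1.0, -1.0])
print(plan(state, goal))                       # first action of the best candidate
```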
629

Evaluation and Design of Robust Neural Network Defenses

Carlini, Nicholas 21 November 2018 (has links)
Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to test-time evasion attacks (adversarial examples): inputs specifically designed by an adversary to cause a neural network to misclassify them. This makes applying neural networks in security-critical areas a concern. In this dissertation, we introduce a general framework for evaluating the robustness of neural networks through optimization-based methods. We apply our framework to two different domains, image recognition and automatic speech recognition, and find it provides state-of-the-art results for both. To further demonstrate the power of our methods, we apply our attacks to break 14 defenses that have been proposed to alleviate adversarial examples. We then turn to the problem of designing a secure classifier. Given this apparently fundamental vulnerability of neural networks to adversarial examples, instead of taking an existing classifier and attempting to make it robust, we construct a new classifier which is provably robust by design under a restricted threat model. We consider the domain of malware classification, and construct a neural network classifier that cannot be fooled by an insertion adversary, who can only insert new functionality and cannot change existing functionality. We hope this dissertation will provide a useful starting point for both evaluating and constructing neural networks that are robust in the presence of an adversary.
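The optimization view of adversarial examples can be illustrated on a toy differentiable model: perturb the input within a small L-infinity budget so as to maximize the loss of its current label. The sketch below uses a fixed linear classifier and projected gradient steps; it shows the general idea only and is not the specific attack formulations developed in the dissertation.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=5), 0.1                 # a fixed "trained" linear model
x = rng.normal(size=5)                         # an input the model classifies
y = 1 if x @ w + b > 0 else 0                  # treat its prediction as the true label

def predict(x):
    return 1 if x @ w + b > 0 else 0

def grad_loss(x, y):
    """Gradient of the logistic loss with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    return (p - y) * w

eps, step = 1.0, 0.1                           # L-infinity budget and step size
x_adv = x.copy()
for _ in range(50):                            # projected gradient ascent on the loss
    x_adv = x_adv + step * np.sign(grad_loss(x_adv, y))
    x_adv = np.clip(x_adv, x - eps, x + eps)   # stay within the perturbation budget

print(predict(x), predict(x_adv))              # the perturbation should flip the label
```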
630

Assessing the Correlation Between Scores of Intelligence and the PEAK-Generalization Module

Morrissey, Joanna Marie 01 December 2016 (has links)
The present study sought to examine the relationship between generalization skills and performance on a standardized IQ assessment in 30 individuals with developmental or intellectual disabilities (73% had a diagnosis of autism). Participants’ generalization skills were tested using the Promoting the Emergence of Advanced Knowledge Generalization Module (PEAK-G), and IQ was assessed using either the WISC-IV Short Form assessment or the WPPSI-III Short Form assessment. The data indicated a strong, significant correlation between scores on the PEAK-G and IQ using both Raw IQ (r = .839, p < .01) and Full Scale IQ (r = .628, p < .01). Both Raw IQ and Full Scale IQ were further analyzed by comparing each of them to the three subtests of the PEAK-G (Foundational Learning and Basic Social Skills; Verbal Comprehension, Memory and Advanced Social Skills; and Verbal Reasoning, Problem Solving, and Advanced Mathematical Skills). The results help to provide a better understanding of how closely participants’ IQ scores correlate with their PEAK-G scores.
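For illustration, the core of the reported analysis is a Pearson correlation between two sets of scores, as in the short sketch below; the numbers are made up and are not the study's data.

```python
from scipy.stats import pearsonr

# Hypothetical paired scores for a handful of participants (not the study's data).
peak_g_scores = [42, 55, 61, 70, 78, 85, 90, 101, 110, 118]
iq_scores     = [58, 63, 67, 72, 80, 84, 95, 99, 104, 112]

r, p = pearsonr(peak_g_scores, iq_scores)
print(f"r = {r:.3f}, p = {p:.4f}")   # a strong positive correlation for this toy data
```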
