491

Usable, Secure Content-Based Encryption on the Web

Ruoti, Scott 01 July 2016
Users share private information on the web through a variety of applications, such as email, instant messaging, social media, and document sharing. Unfortunately, recent revelations have shown that users' data is at risk not only from hackers and malicious insiders, but also from government surveillance. This state of affairs motivates the need for users to be able to encrypt their online data. In this dissertation, we explore how to help users encrypt their online data, with a special focus on securing email. First, we explore the design principles that are necessary to create usable, secure email. As part of this exploration, we conduct eight usability studies of eleven different secure email tools, with a total of 347 participants. Second, we develop a novel paired-participant methodology that allows us to test whether a given secure email system can be adopted in a grassroots fashion. Third, we apply our discovered design principles to PGP-based secure email and demonstrate that these principles are sufficient to create the first PGP-based system that is usable by novices. We have also begun applying the lessons learned from our secure email research more generally to content-based encryption on the web. As part of this effort, we develop MessageGuard, a platform for accelerating research into usable, content-based encryption. Using MessageGuard, we build and evaluate Private Facebook Chat (PFC), a secure instant messaging system that integrates with Facebook Chat. Results from our usability analysis of PFC provide initial evidence that our design principles are also important components of usable, content-based encryption on the web.
492

End user software engineering features for both genders

Sorte, Shraddha 17 October 2005
Graduation date: 2006 / Previous research has revealed gender differences that affect females' willingness to adopt software features in end users' programming environments. Since these features have separately been shown to help end users problem solve, it is important for female end users' productivity that we find ways to make these features more acceptable to females. This thesis draws from our ongoing work with users to inform our design of theory-based methods for encouraging effective feature usage by both genders. This design effort is the first to begin addressing gender differences in the ways people go about problem solving in end-user programming situations.
493

Development and simulation of hard real-time switched-ethernet avionics data network

Chen, Tao 08 1900
Computer and microelectronics technologies are developing very quickly, and modern integrated avionics systems are growing alongside them. Modern integrated modular architectures increasingly require low-latency, reliable communication databuses with high bandwidth. Traditional avionics databus technologies, such as ARINC 429, cannot provide sufficient speed or capacity for data communication between advanced avionics devices. AFDX (Avionics Full Duplex Switched Ethernet), a high-speed full-duplex switched avionics databus built on Ethernet technology, is a good solution to this problem: it avoids Ethernet conflicts and collisions while increasing the transmission rate and lowering the weight of the databus, and it has been adopted successfully on the A380 and B787 aircraft. Avionics data must be delivered punctually and reliably, so it is essential to validate the real-time performance of AFDX during the design process. Simulation is a good method for measuring network performance, but it only covers a given set of scenarios, and it is impossible to consider every case. A rigorous network performance method that yields a pessimistic upper bound for the worst-case scenario therefore needs to be derived. Avionics design engineers have launched many studies of AFDX simulation and analysis methods, and that is the goal this thesis aims at. The project was planned in two steps. In the first step, a communication platform was implemented to simulate the AFDX network in two versions: an RTAI real-time framework and a Linux user-space framework. Ultimately, these frameworks were to be integrated into net-ASS, an integrated simulation and assessment platform in Cranfield's lab. The second step derives an effective method to evaluate network performance, including three bounds (delay, backlog, and output flow), based on Network Calculus (NC), a deterministic queueing theory for network systems that is also used in communication queue management. This mathematical method was to be verified against simulation results from the AFDX communication platform, in order to ensure its validity and applicability. All in all, the project aims to assess the performance of different network topologies in different avionics architectures, through both simulation and mathematical assessment. The techniques used in this thesis help find problems and faults at an early stage of avionics architecture design in industrial projects, especially in terms of guaranteeing lossless service on the avionics databus.
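For readers unfamiliar with Network Calculus, the three bounds mentioned above have well-known closed forms in the common case of a token-bucket arrival curve α(t) = b + rt and a rate-latency service curve β(t) = R(t − T)+. The sketch below illustrates those standard formulas only; the variable names and numbers are ours, not the thesis's implementation.

```python
# Deterministic Network Calculus bounds for one node (illustrative sketch,
# not the thesis's code). Assumes a token-bucket arrival curve
# alpha(t) = b + r*t and a rate-latency service curve beta(t) = R*(t - T)+,
# with r <= R so the node is stable.

def nc_bounds(b, r, R, T):
    """Return (delay_bound, backlog_bound) for a single AFDX-style node."""
    assert r <= R, "arrival rate must not exceed service rate"
    delay_bound = T + b / R      # horizontal deviation between the curves
    backlog_bound = b + r * T    # vertical deviation between the curves
    return delay_bound, backlog_bound

def output_envelope(b, r, T):
    """Arrival curve of the departing flow: alpha*(t) = b + r*(t + T)."""
    return lambda t: b + r * (t + T)

# Example: 1 kbit burst, 10 kbit/s sustained rate, 100 kbit/s link,
# 1 ms switch latency -> delay <= 11 ms, backlog <= 1010 bits.
print(nc_bounds(b=1000.0, r=10000.0, R=100000.0, T=0.001))
```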
494

Analysis of Passive End-to-End Network Performance Measurements

Simpson, Charles Robert, Jr. 02 January 2007
NETI@home, a distributed network measurement infrastructure that collects passive end-to-end network measurements from Internet end-hosts, was developed and discussed. The data collected by this infrastructure, as well as other datasets, were used to study the behavior of the network and its users, along with the security issues affecting the Internet. A flow-based comparison of honeynet traffic, representing malicious traffic, and NETI@home traffic, representing typical end-user traffic, was conducted. This comparison showed that a large portion of the flows in both datasets were failed, potentially malicious connection attempts. We additionally found that worm activity can linger for more than a year after the initial release date. Malicious traffic was also found to originate from across the allocated IP address space. Other security-related observations include the suspicious use of ICMP packets and attacks on our own NETI@home server. Utilizing observed TTL values, studies were also conducted into the distance of Internet routes and the frequency with which they vary. The frequency and use of network address translation and of the private IP address space were also discussed. Various protocol options and flags were analyzed to determine their adoption and use by the Internet community. Finally, network-independent empirical models of end-user network traffic were derived for use in simulation: the first models traffic for a specific TCP or UDP port, and the second models all TCP or UDP traffic for an end-user. These models were implemented and used in GTNetS. Further anonymization of the dataset and the public release of the anonymized data and their associated analysis tools were also discussed.
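The TTL-based distance studies rely on a standard inference: sending hosts initialize the IP TTL to one of a few well-known defaults, so the hop count of a route can be estimated as the gap between the nearest default at or above the observed value and the observed value itself. A minimal sketch of that inference (our own illustration, not NETI@home code):

```python
# Estimate hop distance from an observed IP TTL (illustrative sketch).
# Assumes the sender used one of the common initial TTLs; the heuristic
# breaks down for hosts configured with unusual defaults.

COMMON_INITIAL_TTLS = (32, 64, 128, 255)

def hop_distance(observed_ttl: int) -> int:
    """Hops traversed = assumed initial TTL - TTL seen at the measurement point."""
    initial = min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

print(hop_distance(116))  # -> 12 hops, assuming an initial TTL of 128
```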
495

End-to-End Security of Information Flow in Web-based Applications

Singaravelu, Lenin 25 June 2007
Web-based applications and services are increasingly used for security-sensitive tasks. Current security protocols rely on two crucial assumptions to protect the confidentiality and integrity of information: first, that the end-point software used to handle security-sensitive information is free of vulnerabilities; and second, that communication is point-to-point between a client and a service provider. These assumptions do not hold, however, for large, complex, and vulnerable end-point software such as Internet browsers or web services middleware, or in web service compositions where multiple value-adding service providers can be interposed between a client and the original service provider. To address the problem of large and complex end-point software, we present the AppCore approach, which uses manual analysis of information flow, as opposed to purely automated approaches, to split existing software into two parts: a simplified trusted part that handles security-sensitive information, and a legacy, untrusted part that handles non-sensitive information without access to sensitive information. Not only does this approach avoid many common and well-known vulnerabilities in the legacy software that compromised sensitive information, it also greatly reduces the size and complexity of the trusted code, making exhaustive testing or formal analysis more feasible. We demonstrate the feasibility of the AppCore approach by constructing AppCores for two real-world applications: a client-side AppCore for https-based applications and an AppCore for web service platforms. Our evaluation shows that security improvements and complexity reductions (over a factor of five) can be attained with minimal modifications to existing software (a few tens of lines of code, plus the proxy settings of a browser) and an acceptable performance overhead (a few percent). To protect the communication of sensitive information between clients and service providers in web service compositions, we present an end-to-end security framework called WS-FESec that provides end-to-end security properties even in the presence of misbehaving intermediate services. We show that WS-FESec is flexible enough to support the lattice model of secure information flow, and that it guarantees precise security properties for each component service at a modest cost of a few milliseconds per signature or encrypted field.
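Support for the lattice model of secure information flow reduces, at its core, to a "can-flow" check: data labeled a may flow to a component labeled b only if a ⊑ b in the label lattice. A toy sketch of such a check follows; the labels and their ordering (a simple totally ordered chain, which is one kind of lattice) are hypothetical and are not WS-FESec's API.

```python
# Toy lattice-model flow check (illustrative; not WS-FESec code).
# A flow from label src to label dst is permitted only if src ⊑ dst,
# i.e. dst is at least as restrictive as src.

LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def can_flow(src: str, dst: str) -> bool:
    """Return True if information labeled `src` may flow to `dst`."""
    return LEVELS[src] <= LEVELS[dst]

assert can_flow("public", "secret")        # upgrading a label is fine
assert not can_flow("secret", "public")    # leaking downward is rejected
```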
496

Impact of wireless losses on the predictability of end-to-end flow characteristics in Mobile IP Networks

Bhoite, Sameer Prabhakarrao 17 February 2005
Technological advancements have led to an increase in the number of wireless and mobile devices such as PDAs, laptops, and smart phones. This has resulted in an ever-increasing demand for wireless access to the Internet, and wireless mobile traffic is expected to form a significant fraction of Internet traffic in the near future, over the so-called Mobile Internet Protocol (MIP) networks. For real-time applications, such as voice, video, and process monitoring and control, deployed over standard IP networks, network resources must be properly allocated so that the mobile end-user is guaranteed a certain Quality of Service (QoS). Like wired, fixed IP networks, MIP networks do not offer any QoS guarantees; such networks were designed for non-real-time applications. To deploy real-time applications in such networks without major modifications to the network infrastructure, the end-points must provide some level of QoS guarantee, and such QoS control requires the ability to predict end-to-end flow characteristics. In this research, network flow accumulation is used as a measure of end-to-end network congestion. Careful analysis of the flow accumulation signal shows that it has long-term dependencies and is very noisy, making it very difficult to predict. Hence, this work predicts the moving average of the flow accumulation signal. Both single-step and multi-step predictors are developed using linear system identification techniques; a multi-step prediction error of up to 17% is achieved for prediction horizons of up to 0.5 s. The main thrust of this research is the impact of wireless losses on the ability to predict end-to-end flow accumulation. As opposed to wired, congestion-related packet losses, the losses occurring in a wireless channel are to a large extent random, making the prediction of flow accumulation more challenging. Our prediction studies demonstrate that, with an accurate predictor, the prediction error increases by up to 170% when the wireless loss rate reaches 15%, compared to the case of no wireless loss. As the accuracy of the no-loss predictor deteriorates, the impact of wireless losses on the flow accumulation prediction error decreases.
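Single-step predictors of this kind can be realized with a simple autoregressive (AR) model fitted by least squares, with multi-step prediction obtained by iterating the one-step predictor on its own outputs. The sketch below is a generic illustration of that identification approach under assumed model order and horizon; it is not the thesis's exact model.

```python
import numpy as np

# Fit an AR(p) one-step predictor to a smoothed flow-accumulation signal
# by least squares, then forecast h steps ahead by iterating the model.
# Illustrative sketch: model order, window, and horizon are assumptions.

def fit_ar(x: np.ndarray, p: int) -> np.ndarray:
    """Least-squares AR(p) coefficients a such that x[t] ~= a @ x[t-p:t]."""
    rows = np.array([x[t - p:t] for t in range(p, len(x))])
    targets = x[p:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return a

def predict(x_hist: np.ndarray, a: np.ndarray, h: int) -> np.ndarray:
    """Iterated multi-step forecast: feed predictions back in as inputs."""
    window = list(x_hist[-len(a):])
    out = []
    for _ in range(h):
        nxt = float(np.dot(a, window))
        out.append(nxt)
        window = window[1:] + [nxt]
    return np.array(out)

# Smooth a noisy synthetic signal with a moving average before fitting,
# mirroring the thesis's use of the moving average of flow accumulation.
raw = np.cumsum(np.random.default_rng(0).normal(size=2000))
smooth = np.convolve(raw, np.ones(25) / 25, mode="valid")
coeffs = fit_ar(smooth[:-50], p=8)
print(predict(smooth[:-50], coeffs, h=10))
```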
497

Estimation of the standard deviation of the end-to-end delay by passive methods / Passive measurement in Software Defined Networks

Nguyen, Huu-Nghi 09 March 2017
Since the advent of the Internet, the volume of data exchanged over networks has grown exponentially. The equipment deployed on networks has become very heterogeneous, due among other things to the proliferation of middleboxes (firewalls, NAT routers, VPN servers, proxies, etc.). The algorithms run on networking devices (routing, spanning tree, etc.) are often complex, sometimes closed and proprietary, and the management interfaces can differ greatly from one manufacturer or device to another. All these factors make networks hard to understand and operate, which motivated a new networking paradigm intended to simplify the design and management of networks: SDN (Software-Defined Networking). SDN introduces the notion of a controller, an entity whose role is to control the devices of the data plane. The SDN concept thus separates the data plane, in charge of forwarding packets and operated by devices called virtual switches in SDN terminology, from the control plane, in charge of all decisions and therefore handled by the SDN controller. For the controller to make its decisions, it must have a global view of the network. Beyond the topology and link capacities, performance criteria such as delay, loss rate, and available bandwidth can be taken into account. This knowledge can enable, for example, multi-class routing and/or help guarantee levels of quality of service. The contributions of this thesis are algorithms that allow a centralized entity, and in particular a controller in an SDN setting, to obtain reliable estimates of the end-to-end delay of the flows crossing the network. The proposed methods are passive, in the sense that they generate no additional traffic. We are particularly interested in the mean and the standard deviation of the delay. It turns out that the first moment can be obtained fairly easily. In contrast, the correlation between the waiting times at the network's nodes makes estimating the standard deviation much more complex. We show that the proposed methods are able to capture the correlations between the delays at the different nodes and to provide accurate estimates of the standard deviation. These results are validated by simulations covering a wide range of scenarios, which validate our algorithms in different contexts of use.
498

Temporal allocation in distributed modular avionics systems

Badache, Nesrine 27 May 2016
The evolution of real-time embedded system architectures toward modular architectures has made it possible to introduce more functionality, through the use of distributed computing modules and standardized communication and service interfaces. In this thesis we focus on the Integrated Modular Avionics (IMA) architecture standardized in ARINC 653 and ARINC 664 Part 7. This evolution has introduced new design challenges, among them meeting the application timing constraints required for the proper functioning of the system. The design of a modular system is an integration problem under constraints, which combines several difficult subproblems (dimensioning, spatial and temporal resource allocation). These difficulties call for integration-support tools that scale, and it is in this context that this thesis work was conducted. We are interested primarily in the allocation of the system's temporal resources. In particular, given a distribution of functions onto the computing modules, we determine execution periods for the distributed embedded functions that guarantee the application timing constraints while offering a high degree of scalability. Our approach takes into account the (bounded) temporal variability of the communication network. The first contribution of this thesis is the formulation of the IMA integration problem as a multi-criteria optimization problem under timing constraints. For a given distribution of avionics functions onto the computing modules, periods for the IMA partitions are sought so as to guarantee the freshness and non-loss of the transmitted data. Among all temporal allocations satisfying the timing constraints, we perform a multi-criteria search that optimizes both a load criterion on the computing modules and a time-margin criterion in the network; these two criteria ease future evolutions of the architecture. The second contribution is the proposal of two multi-criteria search heuristics adapted to our problem. Note that the number of valid temporal allocations grows exponentially with the number of modules and the number of partitions hosted per module. We propose two multi-criteria optimization algorithms: (i) EXHAUST, an optimal exhaustive search algorithm, and (ii) TABOU, a near-optimal algorithm based on the Tabu-search metaheuristic. For both algorithms, the cardinality of the problem is reduced by a local optimization phase at each module, made possible by the linearity of the two chosen metrics. This local optimization step allows EXHAUST to solve the allocation problem optimally for a medium-sized IMA system. We show that for large systems the TABOU algorithm is a very good candidate, since it extracts satisfactory solutions in reasonable time while testing only a limited number of valid allocations. The two heuristics are applied to an IMA system, and the analysis of the solutions obtained highlights the quality of the Pareto-optimal solutions produced by both algorithms: they exhibit the desired scalability in module load and network margin. Our last contribution is a detailed analysis of these solutions, which brings out different classes of Pareto-optimal solutions with different trade-offs between module load and network margin. Knowing these classes allows the system integrator to choose a solution offering the trade-off sought between the load and network-margin criteria.
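To make the Tabu-search idea concrete, the sketch below runs a generic Tabu loop over partition-period assignments with a toy scalarized bi-criteria objective. The period set, budgets, neighborhood, tabu tenure, and weighting are all illustrative assumptions of ours, not the thesis's TABOU algorithm.

```python
import random

# Generic Tabu search over partition-period assignments (illustrative toy,
# not the thesis's TABOU). Each partition picks a period from a harmonic
# set; the toy objective scalarizes module load and a stand-in for network
# margin. A short tabu list forbids immediately undoing recent moves.

PERIODS = [25, 50, 100, 200]          # candidate periods (ms), assumption
WCET = [5, 8, 3, 10, 6]               # per-partition budgets (ms), assumption

def cost(assign):
    load = sum(w / p for w, p in zip(WCET, assign))        # CPU utilization
    margin = sum(assign) / (len(assign) * max(PERIODS))    # toy margin proxy
    return load - 0.3 * margin                             # scalarization

def tabu_search(iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    cur = [rng.choice(PERIODS) for _ in WCET]
    best, best_c = cur[:], cost(cur)
    tabu = []                                  # recently overwritten (idx, old)
    for _ in range(iters):
        moves = [(i, p) for i in range(len(cur)) for p in PERIODS
                 if p != cur[i] and (i, p) not in tabu]
        i, p = min(moves,
                   key=lambda m: cost(cur[:m[0]] + [m[1]] + cur[m[0] + 1:]))
        tabu.append((i, cur[i]))               # forbid undoing this move
        tabu[:] = tabu[-tenure:]
        cur[i] = p
        if cost(cur) < best_c:
            best, best_c = cur[:], cost(cur)
    return best, best_c

print(tabu_search())
```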
499

High-end services toward lower market segments : A qualitative study of SaaS companies and their repositioning toward lower market segments

Stenström, Simon, Pege, Victor January 2018
Companies use market segmentation to divide customers into groups with similar characteristics, such as willingness to pay. A company may eventually face a saturated market within its segments and then needs to reposition toward new segments for continued customer growth. For companies positioned toward segments with high willingness to pay (high-end), one route to continued customer growth is repositioning toward segments with lower willingness to pay (low-end). The purpose of this thesis was to map and identify how companies offering a high-end service can use downward segmentation to reach customers with lower willingness to pay, without damaging the brand or risking the loss of existing customers. Based on this purpose, the following research question was established: How can SaaS companies that offer a high-end service use downward segmentation without damaging their brand and losing existing customers? The study was limited to SaaS companies, a sub-sector of the IT industry, because of their unique, practically non-existent marginal cost. It was conducted using a qualitative research method, in the form of five semi-structured interviews with four companies facing the problem concerned, and is structured throughout around four blocks: segmentation, growth, profitability, and brand. The results, based on the collected empirical data and the analysis, lead to a conclusion identifying four components that a high-end service should possess for a successful strategy shift toward lower market segments. These four criteria are market awareness, a scalable service, a multi-tier pricing model, and brand awareness. The thesis itself is written in Swedish.
500

Evaluation of the influence of consultancy on Fuzzy Front End initiatives

Claudio Marcos Vigna 06 April 2017
The main objective of this work is to study the influence of consultancy on Fuzzy Front End (FFE) initiatives. To that end, the main constructs supporting the analysis are defined: the FFE, fuzziness, the FFE management model, the decision-making model, and consultancy. In order to better understand the phenomenon in question, a literature review identifies the main enablers of FFE management for product innovation processes in companies. On this basis, an FFE management model was elaborated from a pre-existing model and enhanced with decision-making models. The case study methodology and the extended case method were applied to consultancies that execute FFE initiatives in client companies. The research question to be answered was: What is the influence of consultancies on the FFE initiatives of client companies? A model of idea generation was also built, to understand the genesis of ideas in FFE initiatives. Likewise, the FFE was mapped within management science, showing that it should be understood as the engine for generating and selecting ideas for any type of change; to this end, decision theory was used to integrate the FFE with the other fields. As a result, this work offers a view of the FFE that verifies the practical applicability of the enablers identified in the literature review; a model of the dynamics of interaction and competence generation between consultancies and client companies; and a recognition of the value generated by consultancies in FFE initiatives.
