191

Physics-Informed, Data-Driven Framework for Model-Form Uncertainty Estimation and Reduction in RANS Simulations

Wang, Jianxun 05 April 2017 (has links)
Computational fluid dynamics (CFD) has been widely used to simulate turbulent flows. Although an increased availability of computational resources has enabled high-fidelity simulations (e.g., large eddy simulation and direct numerical simulation) of turbulent flows, models based on the Reynolds-Averaged Navier-Stokes (RANS) equations are still the dominant tools for industrial applications. However, the predictive capability of RANS models is limited by potential inaccuracies driven by hypotheses in the Reynolds stress closure. With the ever-increasing use of RANS simulations in mission-critical applications, the estimation and reduction of model-form uncertainties in RANS models have attracted attention in the turbulence modeling community. In this work, I focus on estimating uncertainties stemming from the RANS turbulence closure and calibrating discrepancies in the modeled Reynolds stresses to improve the predictive capability of RANS models. Both on-line and off-line data are utilized to achieve this goal. The main contributions of this dissertation can be summarized as follows: First, a physics-based, data-driven Bayesian framework is developed for estimating and reducing model-form uncertainties in RANS simulations. An iterative ensemble Kalman method is employed to assimilate sparse on-line measurement data and empirical prior knowledge for a full-field inversion. The merits of incorporating prior knowledge and physical constraints in calibrating RANS model discrepancies are demonstrated and discussed. Second, a random matrix theoretic framework is proposed for estimating model-form uncertainties in RANS simulations. The maximum entropy principle is employed to identify the probability distribution that satisfies the given constraints without introducing artificial information. Objective prior perturbations of the RANS-predicted Reynolds stresses in physical projections are provided based on comparisons between the physics-based and random matrix theoretic approaches. Finally, a physics-informed machine learning framework towards predictive RANS turbulence modeling is proposed. The functional forms of the model discrepancies with respect to mean flow features are extracted from an off-line database of closely related flows using machine learning algorithms. The RANS-modeled Reynolds stresses of prediction flows can be significantly improved by the trained discrepancy function, which is an important step towards predictive turbulence modeling. / Ph. D.
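The iterative ensemble Kalman method used for the full-field inversion admits a compact illustration. Below is a minimal sketch of an ensemble Kalman inversion loop with perturbed observations; the state size, observation operator, noise levels, and synthetic truth are all illustrative assumptions, not the dissertation's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

n_state, n_ens, n_obs = 50, 30, 5            # illustrative sizes (assumed)
H = np.zeros((n_obs, n_state))               # sparse observation operator
H[np.arange(n_obs), np.linspace(0, n_state - 1, n_obs, dtype=int)] = 1.0
R = 0.01 * np.eye(n_obs)                     # observation-noise covariance

x_true = np.sin(np.linspace(0, np.pi, n_state))   # synthetic "truth" field
y_obs = H @ x_true + rng.normal(0.0, 0.1, n_obs)  # sparse measurements

X = rng.normal(0.0, 1.0, (n_state, n_ens))   # prior ensemble (empirical prior)

for _ in range(10):                          # iterative assimilation
    A = X - X.mean(axis=1, keepdims=True)    # ensemble anomalies
    HA = H @ A
    P_HT = A @ HA.T / (n_ens - 1)            # cross-covariance P H^T
    S = HA @ HA.T / (n_ens - 1) + R          # innovation covariance
    K = P_HT @ np.linalg.solve(S, np.eye(n_obs))  # Kalman gain
    Y = y_obs[:, None] + rng.normal(0.0, 0.1, (n_obs, n_ens))  # perturbed obs
    X = X + K @ (Y - H @ X)                  # analysis update

print("RMSE vs truth:", np.sqrt(((X.mean(axis=1) - x_true) ** 2).mean()))
```

In the dissertation's setting, the "state" would be a discretized Reynolds stress discrepancy field and the forward map a RANS solve rather than the linear operator assumed here.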
192

Pipelines for Computational Social Science Experiments and Model Building

Cedeno, Vanessa Ines 12 July 2019 (has links)
There has been significant growth in online social science experiments in order to understand behavior at scale, with finer-grained data collection. Considerable work is required to perform data analytics for custom experiments. In this dissertation, we design and build composable and extensible automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has previously been pursued in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group identity, or CI, is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making. Because of this, there is interest in modeling situations that promote the creation of CI in order to learn more from the process and to predict human behavior in real-life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group. Given all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. Also, abduction is an inference approach that uses data and observations to identify plausible (and preferably, best) explanations for phenomena. Abduction has broad application in robotics, genetics, automated systems, and image understanding, but has largely not been applied to human behavior. We use these pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. In a group anagrams web-based networked game setting, we formalize an abductive loop, implement it computationally, and exercise it; we build and evaluate three agent-based models (ABMs) through a set of composable and extensible pipelines; we also analyze experimental data and develop mechanistic and data-driven models of human reasoning to predict detailed game player action. The agreement between model predictions and experimental data indicates that our models can explain behavior and provide novel experimental insights into CI. / Doctor of Philosophy / To understand individual and collective behavior, there has been significant interest in using online systems to carry out social science experiments. Considerable work is required to analyze the data and uncover interesting insights. In this dissertation, we design and build automated software pipelines for evaluating social phenomena through iterative experiments and modeling. To reason about experiments and models, we design a formal data model. This combined approach of experiments and models has previously been pursued in some studies without automation, or purely conceptually. We are motivated by a particular social behavior, namely collective identity (CI). Group identity, or CI, is an individual's cognitive, moral, and emotional connection with a broader community, category, practice, or institution. Extensive experimental research shows that CI influences human decision-making, so there is interest in modeling situations that promote the creation of CI to learn more from the process and to predict human behavior in real-life situations. One of our goals in this dissertation is to understand whether a cooperative anagram game can produce CI within a group.
Given all of the experimental work on anagram games, it is surprising that very little work has been done in modeling these games. In addition, we use abduction, an inference approach that uses data and observations to identify the best explanations for phenomena. Abduction has broad application in robotics, genetics, automated systems, and image understanding, but has largely not been applied to human behavior. In the setting of a web-based networked group anagrams game, we do the following. We use our pipelines to understand intra-group cooperation and its effect on fostering CI. We devise and execute an iterative abductive analysis process that is driven by the social sciences. We build and evaluate three agent-based models (ABMs). We analyze experimental data and develop models of human reasoning to predict detailed game player action. Because there is agreement between the model predictions and the experimental data, we claim that our models can explain behavior and provide novel experimental insights into CI.
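The iterative abductive loop described above can be pictured as a simple score-and-refine cycle: candidate model parameterizations are scored against an experimental observable, the best-supported explanation is kept, and refined hypotheses are proposed around it. In the toy sketch below, every number, class, and parameter is an invented placeholder, not the dissertation's actual ABMs or data:

```python
import random

random.seed(1)

# Toy target: an observed cooperation rate from a group anagrams
# experiment (a made-up number, for illustration only).
observed_rate = 0.62

class ABM:
    """Toy agent-based model: each agent cooperates with probability p."""
    def __init__(self, p):
        self.p = p
    def simulate(self, n_agents=1000):
        return sum(random.random() < self.p for _ in range(n_agents)) / n_agents

def score(model):
    """Discrepancy between simulated output and the experimental observation."""
    return abs(model.simulate() - observed_rate)

# Abductive loop: score candidate explanations against data, keep the
# best one, propose refined hypotheses around it, and repeat.
hypotheses = [ABM(p / 10) for p in range(1, 10)]
for _ in range(4):
    best = min(hypotheses, key=score)
    hypotheses = [ABM(min(max(best.p + d, 0.01), 0.99))
                  for d in (-0.05, -0.02, 0.0, 0.02, 0.05)]
best = min(hypotheses, key=score)
print(f"best-supported cooperation probability: {best.p:.2f}")
```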
193

Heavy Tails and Anomalous Diffusion in Human Online Dynamics

Wang, Xiangwen 28 February 2019 (has links)
In this dissertation, I extend the analysis of human dynamics to human movements in online activities. My work starts with a discussion of the human information foraging process based on three large collections of empirical search click-through logs collected in different time periods. With the analogy of viewing the click-through on search engine result pages as a random walk, a variety of quantities, such as the distributions of step length and waiting time as well as mean-squared displacements, correlations, and entropies, are discussed. Notable differences between the logs reveal an increased efficiency of the search engines, which is found to be related to the vanishing of the heavy-tailed characteristics of step lengths in newer logs as well as the switch from superdiffusion to normal diffusion in the diffusive processes of the random walks. In the language of foraging, the newer logs indicate that online searches overwhelmingly yield local searches, whereas for the older logs the foraging processes are a combination of local searches and relocation phases that are power-law distributed. The investigation highlights the presence of intermittent search processes in online searches, where phases of local exploration are separated by power-law distributed relocation jumps. In the second part of this dissertation I focus on an in-depth analysis of online gambling behaviors. For this analysis the collected empirical gambling logs reveal the wide existence of heavy-tailed statistics in various quantities in different online gambling games. For example, when players are allowed to choose arbitrary bet values, the bet values follow log-normal distributions, whereas if they are restricted to using items as wagers, the distributions become truncated power laws. Under the analogy of viewing the net change of income of each player as a random walk, the mean-squared displacement and first-passage time distribution of these net-income random walks both exhibit anomalous diffusion. In particular, in an online lottery game the mean-squared displacement presents a crossover from a superdiffusive to a normal diffusive regime, which is reproduced using simulations and explained analytically. This investigation also reveals the scaling characteristics and probability reweighting in the risk attitudes of online gamblers, which may help to interpret behaviors in economic systems. This work was supported by the US National Science Foundation through grants DMR-1205309 and DMR-1606814. / Ph. D. / Humans are complex, and understanding complex human behaviors is of crucial importance in solving many social problems. In recent years, sociophysicists have made substantial progress in human dynamics research. In this dissertation, I extend this type of analysis to human movements in online activities. My work starts with a discussion of the human information foraging process. This investigation is based on empirical search logs and an analogy of viewing the click-through on search engine result pages as a random walk. With an increased efficiency of the search engines, the heavy-tailed characteristics of step lengths disappear, and the diffusive processes of the random walkers switch from superdiffusion to normal diffusion. In the language of foraging, the newer logs indicate that online searches overwhelmingly yield local searches, whereas for the older logs the foraging processes are a combination of local searches and relocation phases that are power-law distributed.
The investigation highlights the presence of intermittent search processes in online searches, where phases of local exploration are separated by power-law distributed relocation jumps. In the second part of this dissertation I focus on an in-depth analysis of online gambling behaviors, where the collected empirical gambling logs reveal the wide existence of heavy-tailed statistics in various quantities. Using the analogy of viewing the net change of income of each player as a random walk, the mean-squared displacement and first-passage time distribution of these net-income random walks exhibit anomalous diffusion. This investigation also reveals the scaling characteristics and probability reweighting in the risk attitudes of online gamblers, which may help to interpret behaviors in economic systems. This work was supported by the US National Science Foundation through grants DMR-1205309 and DMR-1606814.
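The diagnostic separating superdiffusion from normal diffusion in both parts of this work is the scaling exponent alpha of the mean-squared displacement, MSD(t) ~ t^alpha (alpha close to 1 for normal diffusion, alpha above 1 for superdiffusion). A minimal sketch of this measurement on toy trajectories; the step distributions and sizes are illustrative stand-ins for the empirical logs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy trajectories: Gaussian steps (normal diffusion) vs. heavy-tailed
# signed Pareto steps (superdiffusive, Levy-flight-like).
n_walkers, n_steps = 500, 1000
gauss = np.cumsum(rng.normal(size=(n_walkers, n_steps)), axis=1)
signs = rng.choice([-1.0, 1.0], size=(n_walkers, n_steps))
levy = np.cumsum(signs * rng.pareto(1.5, size=(n_walkers, n_steps)), axis=1)

def msd_exponent(x):
    """Fit MSD(t) ~ t^alpha over the ensemble of trajectories."""
    t = np.arange(1, x.shape[1])
    msd = ((x[:, 1:] - x[:, :1]) ** 2).mean(axis=0)   # displacement from start
    alpha, _ = np.polyfit(np.log(t), np.log(msd), 1)  # log-log slope
    return alpha

print(f"Gaussian steps: alpha ~ {msd_exponent(gauss):.2f} (normal diffusion)")
print(f"Pareto steps:   alpha ~ {msd_exponent(levy):.2f} (superdiffusive)")
```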
194

Predictive Turbulence Modeling with Bayesian Inference and Physics-Informed Machine Learning

Wu, Jinlong 25 September 2018 (has links)
Reynolds-Averaged Navier-Stokes (RANS) simulations are widely used for engineering design and analysis involving turbulent flows. In RANS simulations, the Reynolds stress needs closure models, and the existing models have large model-form uncertainties. Therefore, RANS simulations are known to be unreliable in many flows of engineering relevance, including flows with three-dimensional structures, swirl, pressure gradients, or curvature. This lack of accuracy in complex flows has diminished the utility of RANS simulations as a predictive tool for engineering design, analysis, optimization, and reliability assessments. Recently, data-driven methods have emerged as a promising alternative for developing Reynolds stress models for RANS simulations. In this dissertation I explore two physics-informed, data-driven frameworks to improve RANS-modeled Reynolds stresses. First, a Bayesian inference framework is proposed to quantify and reduce the model-form uncertainty of the RANS-modeled Reynolds stress by leveraging online sparse measurement data together with empirical prior knowledge. Second, a machine-learning-assisted framework is proposed to utilize offline high-fidelity simulation databases. Numerical results show that the data-driven RANS models give better predictions of the Reynolds stress and other quantities of interest for several canonical flows. Two metrics are also presented for an a priori assessment of the prediction confidence of the machine-learning-assisted RANS model. The proposed data-driven methods are also applicable to the computational study of other physical systems whose governing equations have some unresolved physics to be modeled. / Ph. D.
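The machine-learning-assisted framework maps mean-flow features to Reynolds stress discrepancies learned from offline high-fidelity data. Below is a minimal sketch of that idea using a random forest regressor; the feature set, the synthetic target, and all names are illustrative assumptions, not the dissertation's actual inputs or algorithm:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Toy "offline database": mean-flow features q (e.g., normalized strain
# rate, pressure gradient, wall distance) and the Reynolds stress
# discrepancy dtau = tau_highfidelity - tau_RANS on training flows.
n_train = 2000
q_train = rng.uniform(-1, 1, (n_train, 3))
dtau_train = (0.5 * q_train[:, 0] ** 2
              - 0.3 * q_train[:, 1] * q_train[:, 2])  # synthetic target

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(q_train, dtau_train)                        # learn discrepancy map

# "Prediction flow": correct the baseline RANS Reynolds stress with the
# trained discrepancy function.
q_new = rng.uniform(-1, 1, (5, 3))
tau_rans = rng.uniform(0, 1, 5)                       # baseline modeled stress
tau_corrected = tau_rans + model.predict(q_new)
print(tau_corrected.round(3))
```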
195

An evaluation of a data-driven approach to regional scale surface runoff modelling

Zhang, Ruoyu 03 August 2018 (has links)
Modelling surface runoff can be beneficial to operations within many fields, such as agriculture planning, flood and drought risk assessment, and water resource management. In this study, we built a data-driven model that can reproduce monthly surface runoff at a 4-km grid network covering 13 watersheds in the Chesapeake Bay area. We used a random forest algorithm to build the model, where monthly precipitation, temperature, land cover, and topographic data were used as predictors, and monthly surface runoff generated by the SWAT hydrological model was used as the response. A sub-model was developed for each of the 12 monthly surface runoff estimates, independent of one another. Accuracy statistics and variable importance measures from the random forest algorithm reveal that precipitation was the most important variable in the model, but including climatological data from multiple months as predictors significantly improves model performance. Using 3-month climatological data, land cover, and DEM derivatives from 40% of the 4-km grids as the training dataset, our model successfully predicted surface runoff for the remaining 60% of the grids (mean R2 (RMSE) for the 12 monthly models is 0.83 (6.60 mm)). The lowest R2 was associated with the model for August, when surface runoff values are the lowest of the year. Among the 13 studied watersheds, the highest predictive errors were found within the watershed with the greatest topographic complexity, for which the model tended to underestimate surface runoff. For the other 12 watersheds studied, the data-driven model produced smaller and more spatially consistent predictive errors. / Master of Science / Surface runoff data can be valuable to many fields, such as agriculture planning, water resource management, and flood and drought risk assessment. The traditional approach to acquiring surface runoff data is to run hydrological models, but running such models requires advanced knowledge of the watersheds and of computational technologies. In this study, we build a statistical model that can reproduce monthly surface runoff at a 4-km grid covering 13 watersheds in the Chesapeake Bay area. This model uses publicly accessible climate, land cover, and topographic datasets as predictors, and monthly surface runoff from the SWAT model as the response. We develop 12 monthly models, one for each month, independent of one another. To test whether the model can generalize surface runoff across the entire study area, we use 40% of the grid data as the training sample and the remainder for validation. The accuracy statistics (an annual mean R2 of 0.83 and RMSE of 6.60 mm) show that our model can accurately reproduce the monthly surface runoff of our study area. The statistics for the August model are not as good as those for the other months; a possible reason is that surface runoff in August is the lowest of the year, so there is not enough variation for the algorithm to distinguish minor differences in the response during model building. When applying the model to watersheds with steep terrain, the results should be treated with caution, as the errors may be relatively large.
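The study design (train one monthly random forest on 40% of the grid cells, validate on the remaining 60%, and report R2 and RMSE) can be sketched as follows. The synthetic data below merely stands in for the real climatological, land cover, and DEM predictors, and the variable names are assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Synthetic stand-in for one month's 4-km grid cells: 3-month precipitation
# climatology, temperature, land cover, and a DEM derivative.
n_cells = 5000
X = np.column_stack([
    rng.gamma(2.0, 40.0, n_cells),   # precip, month m     (mm)
    rng.gamma(2.0, 40.0, n_cells),   # precip, month m-1   (mm)
    rng.gamma(2.0, 40.0, n_cells),   # precip, month m-2   (mm)
    rng.normal(15.0, 5.0, n_cells),  # temperature         (deg C)
    rng.uniform(0, 1, n_cells),      # impervious fraction
    rng.uniform(0, 30, n_cells),     # slope from DEM      (deg)
])
runoff = 0.3 * X[:, 0] * (0.5 + X[:, 4]) + rng.normal(0, 5, n_cells)

# 40% of grid cells for training, 60% held out, mirroring the study design.
X_tr, X_te, y_tr, y_te = train_test_split(X, runoff, train_size=0.4,
                                          random_state=7)
rf = RandomForestRegressor(n_estimators=300, random_state=7).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R2 = {r2_score(y_te, pred):.2f}, "
      f"RMSE = {mean_squared_error(y_te, pred) ** 0.5:.2f} mm")
```

One such sub-model would be trained per calendar month, independently, as the abstract describes.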
196

Commutation Error in Reduced Order Modeling

Koc, Birgul 01 October 2018 (has links)
We investigate the effect of spatial filtering on the recently proposed data-driven correction reduced order model (DDC-ROM). We compare two filters: the ROM projection, which was originally used to develop the DDC-ROM, and the ROM differential filter, which uses a Helmholtz operator to attenuate the small scales in the input signal. We focus on the following questions: "Do filtering and differentiation with respect to the space variable commute when filtering is applied to the diffusion term?" (in other words, "Do we have a commutation error (CE) in the diffusion term?") and "If so, is the commutation error data-driven correction ROM (CE-DDC-ROM) more accurate than the original DDC-ROM?" If the CE exists, the DDC-ROM has two different correction terms: one comes from the diffusion term and the other from the nonlinear convection term. We investigate the DDC-ROM and the CE-DDC-ROM equipped with the two ROM spatial filters in the numerical simulation of the Burgers equation with different diffusion coefficients and two different initial conditions (smooth and non-smooth). / M.S. / We propose reduced order models (ROMs) for efficient and relatively accurate numerical simulation of nonlinear systems. We use the ROM projection and the ROM differential filter to construct a novel data-driven correction ROM (DDC-ROM). We show that ROM spatial filtering and differentiation do not commute for the diffusion operator. Furthermore, we show that the resulting commutation error has an important effect on the ROM, especially for low viscosity values. As a mathematical model for our numerical study, we use the one-dimensional Burgers equation with smooth and non-smooth initial conditions.
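For concreteness, the differential filter and the commutation error in the diffusion term can be written as follows. This is a standard formulation sketched for the 1D Burgers setting, consistent with the abstract; the thesis's exact definitions (and its ROM-projection variant) may differ in detail:

```latex
% ROM differential filter: a Helmholtz operator damps the small scales
% in the input signal u (delta is the filter radius):
\left( I - \delta^2 \, \partial_{xx} \right) \bar{u} = u .

% Commutation error (CE) in the diffusion term: filtering and spatial
% differentiation need not commute,
\mathrm{CE}(u) = \overline{u_{xx}} - \bar{u}_{xx} ,

% so the filtered Burgers equation carries a correction from the
% nonlinear convection term and, when CE(u) is nonzero, a second
% correction from the diffusion term:
\bar{u}_t + \overline{u\, u_x} = \nu \, \bar{u}_{xx} + \nu \, \mathrm{CE}(u) .
```

When the CE vanishes (as for the ROM projection applied to certain operators), only the convection correction survives, which is why the question of commutation determines how many correction terms the DDC-ROM needs.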
197

Neural Network Gaussian Process considering Input Uncertainty and Application to Composite Structures Assembly

Lee, Cheol Hei 18 May 2020 (has links)
Developing machine-learning-enabled smart manufacturing is promising for the composite structures assembly process. It requires accurate predictive analysis of the deformation of composite structures to improve the production quality and efficiency of composite structures assembly. Composite structures assembly involves two challenges: (i) the highly nonlinear and anisotropic properties of composite materials; and (ii) inevitable uncertainty in the assembly process. To overcome these problems, we propose a neural network Gaussian process model that considers input uncertainty for composite structures assembly. The deep architecture of our model allows us to approximate a complex system better, and consideration of input uncertainty enables robust modeling that fully incorporates the process uncertainty. Our case study shows that the proposed method performs better than benchmark methods for highly nonlinear systems. / Master of Science / Composite materials are becoming more popular in many areas due to their desirable properties, yet computational modeling of them is not an easy task because of their complex structures. Moreover, real-world problems are generally subject to uncertainty that cannot be observed, which makes them more difficult to solve. Therefore, successful predictive modeling of a composite material product requires consideration of the various uncertainties in the problem. The neural network Gaussian process (NNGP) is a recently developed statistical technique that can be applied to machine learning. The most interesting property of the NNGP is that it is derived from the equivalence between deep neural networks and Gaussian processes, which has drawn much attention in the machine learning field. However, related work has so far ignored uncertainty in the input data, which may be an inappropriate assumption in real problems. In this thesis, we derive the NNGP considering input uncertainty (NNGPIU) based on the unique characteristics of composite materials. Although our motivation comes from the manipulation of composite materials, the NNGPIU can be applied to any problem where the input data are corrupted by unknown noise. Our work shows how the NNGPIU can be derived theoretically and demonstrates that the proposed method performs better than benchmark methods for highly nonlinear systems.
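The NNGP rests on a deterministic kernel recursion over network depth. The sketch below implements the standard noise-free ReLU recursion (in the style of Lee et al., 2018) and uses it for Gaussian process prediction; the thesis's NNGPIU contribution, handling input uncertainty, is not shown here, and all sizes and hyperparameters are illustrative assumptions:

```python
import numpy as np

def nngp_kernel(X1, X2, depth=3, sw2=1.6, sb2=0.1):
    """Standard NNGP kernel for a deep ReLU network with noise-free inputs.
    sw2/sb2 are weight/bias prior variances; the thesis's NNGPIU extends
    this recursion to inputs corrupted by unknown noise."""
    d = X1.shape[1]
    k12 = sb2 + sw2 * X1 @ X2.T / d          # layer-0 covariances
    k11 = sb2 + sw2 * (X1 * X1).sum(1) / d
    k22 = sb2 + sw2 * (X2 * X2).sum(1) / d
    for _ in range(depth):                   # propagate through hidden layers
        norm = np.sqrt(np.outer(k11, k22))
        theta = np.arccos(np.clip(k12 / norm, -1.0, 1.0))
        k12 = sb2 + sw2 / (2 * np.pi) * norm * (
            np.sin(theta) + (np.pi - theta) * np.cos(theta))
        k11 = sb2 + sw2 * k11 / 2.0          # ReLU halves the diagonal term
        k22 = sb2 + sw2 * k22 / 2.0
    return k12

# Toy GP regression with the NNGP kernel (data are random placeholders).
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 4)), rng.normal(size=20)
X_star = rng.normal(size=(5, 4))
K = nngp_kernel(X, X) + 1e-3 * np.eye(20)    # small observation noise
mean = nngp_kernel(X_star, X) @ np.linalg.solve(K, y)
print("posterior mean at test points:", mean.round(3))
```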
198

Computational Design of 2D-Mechanical Metamaterials

McMillan, Kiara Lia 22 June 2022 (has links)
Mechanical metamaterials are novel materials that derive unique properties from their underlying microstructure topology rather than from the constituent material they are made of. Their effective properties at the macroscale depend on the design of their microstructural topology. In this work, two classes of mechanical metamaterials are studied within the 2D space. The first class is made of trusses, referred to as truss-based mechanical metamaterials. These materials are studied through their application to a beam component, where finite element analysis is performed to determine how truss-based microstructures affect the displacement behavior of the beam. This analysis is supplemented by the development of a graphical user interface, where users can design a beam made of truss-based microstructures to see how their design affects the beam's behavior. The second class of mechanical metamaterial investigated is made of self-assembled structures called spinodoids. Their smooth topology makes them less prone to the high stress concentrations present in truss-based mechanical metamaterials. A large database of spinodoids is generated in this study. Through data-driven modeling, the geometry of the spinodoids is coupled with their Young's modulus values to approach inverse design under uncertainty. For mechanical metamaterials to be applied in industry, they need to be better understood and thoroughly characterized, and more tools are needed that ease the design of these metamaterials. This work aims to improve the understanding of mechanical metamaterials and to develop efficient computational design strategies catered specifically to them. / Master of Science / Mechanical metamaterials are hierarchical materials involving periodically or aperiodically repeating unit cell arrangements at the microscale. The design of the unit cells allows these materials to display unique properties that are not usually found in traditionally manufactured materials, enabling their use in a multitude of potential engineering applications. The presented study explores two classes of mechanical metamaterials within the 2D space: truss-based architectures and spinodoids. Truss-based mechanical metamaterials are made of trusses arranged in a lattice-like framework, whereas spinodoids are unit cells containing smooth structures that mimic the two coexisting phases of a phase-separation process called spinodal decomposition. In this research, computational design strategies are applied to efficiently model and further understand these sub-classes of mechanical metamaterials.
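Spinodoid topologies are commonly generated as level sets of an anisotropic Gaussian random field built from superposed plane waves. A minimal 2D sketch of that construction follows; the wave count, wavenumber, cone angles, and volume fraction are illustrative assumptions, not the thesis's actual design space:

```python
import numpy as np

rng = np.random.default_rng(3)

# Gaussian-random-field construction of a 2D spinodoid-like topology:
# superpose N plane waves with random phases, then threshold the field.
N_waves, beta, theta_max = 100, 10 * np.pi, np.deg2rad(30)

# Wave directions confined to cones around the x- and y-axes to induce
# anisotropy; phases uniform on [0, 2*pi).
base = rng.choice([0.0, np.pi / 2], N_waves)
angles = base + rng.uniform(-theta_max, theta_max, N_waves)
n = np.column_stack([np.cos(angles), np.sin(angles)])  # unit wave vectors
gamma = rng.uniform(0, 2 * np.pi, N_waves)

# Evaluate the field on a unit-square grid and threshold at a level set
# chosen to hit a target solid volume fraction (here 50%).
x = np.linspace(0, 1, 256)
X, Y = np.meshgrid(x, x)
pts = np.column_stack([X.ravel(), Y.ravel()])
phi = np.sqrt(2.0 / N_waves) * np.cos(beta * pts @ n.T + gamma).sum(axis=1)
solid = phi.reshape(256, 256) > np.quantile(phi, 0.5)
print(f"solid volume fraction: {solid.mean():.2f}")
```

A database of such microstructures, each paired with a simulated Young's modulus, is the kind of input a data-driven inverse-design model can be trained on.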
199

Trustworthy Soft Sensing in Water Supply Systems using Deep Learning

Sreng, Chhayly 22 May 2024 (has links)
In many industrial and scientific applications, accurate sensor measurements are crucial. Instruments such as nitrate sensors are vulnerable to environmental conditions, calibration drift, high maintenance costs, and degradation. Researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning, to overcome these limitations. Deep learning techniques have shown promise in outperforming traditional methods in many applications by achieving higher accuracy, but they are often criticized as 'black-box' models due to their lack of transparency. This thesis presents a framework for deep learning-based soft sensors that quantifies the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across various scenarios. The framework facilitates comparisons between hard and soft sensors. To validate the framework, I conduct experiments using data generated by AI and Cyber for Water and Ag (ACWA), a cyber-physical system water-controlled environment testbed. Afterwards, the framework is tested on real-world environment data from Alexandria Renew Enterprise (AlexRenew), establishing its applicability and effectiveness in practical settings. / Master of Science / Sensors are essential in various industrial systems and offer numerous advantages. Essential to measurement science and technology, they allow reliable, high-resolution, low-cost measurement and impact areas such as environmental monitoring, medical applications, and security. The importance of sensors extends to the fields of the Internet of Things (IoT) and large-scale data analytics. In these areas, sensors are vital to the generation of data used in industries such as health care, transportation, and surveillance. Big data analytics processes this data for a variety of purposes, including health management and disease prediction, demonstrating the growing importance of sensors in data-driven decision making. In many industrial and scientific applications, precision and trustworthiness in measurements are crucial for informed decision-making and maintaining high-quality processes. Instruments such as nitrate sensors are particularly susceptible to environmental conditions, calibration drift, high maintenance costs, and a tendency to become less reliable over time due to aging. The lifespan of these instruments can be as short as two weeks, posing significant challenges. To overcome these limitations, researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning. Traditional methods have had some success, but they often struggle to fully capture the complex dynamics of natural environments. This has led to increased interest in more sophisticated approaches, such as deep learning techniques. Deep learning-based soft sensors have shown promise in outperforming traditional methods in many applications by achieving higher accuracy. However, they are often criticized as "black-box" models due to their lack of transparency. This raises questions about their reliability and trustworthiness, making it critical to assess these aspects. This thesis presents a comprehensive framework for deep learning-based soft sensors. The framework quantifies the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across a range of contextual scenarios, such as weather conditions, flood events, and water parameters.
These evaluations will help define the trustworthiness of the soft sensor and facilitate comparisons between hard and soft sensors. To validate the framework, we will conduct experiments using data generated by ACWA, a cyber-physical system water-controlled environment testbed we developed. This will provide a controlled environment to test and refine our framework. Subsequently, we will test the framework on real-world environment data from AlexRenew. This will further establish its applicability and effectiveness in practical settings, providing a robust and reliable tool for sensor data analysis and prediction. Ultimately, this work aims to contribute to the broader field of sensor technology, enhancing our ability to make informed decisions based on reliable and accurate sensor data.
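One widely used way to obtain the predictive uncertainty such a framework evaluates is Monte Carlo dropout: keep dropout active at inference and treat the spread of repeated stochastic forward passes as an uncertainty estimate. The sketch below assumes this technique and a placeholder architecture; the thesis's actual estimator and network may differ:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy soft sensor: predict a water-quality target (e.g., nitrate) from
# other cheap-to-measure parameters. Architecture and inputs are
# placeholders, not the thesis's model or data.
model = nn.Sequential(
    nn.Linear(6, 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(0.1),
    nn.Linear(64, 1),
)

x = torch.randn(32, 6)            # a batch of sensor readings

# Monte Carlo dropout: model.train() keeps the Dropout layers stochastic,
# so each forward pass samples a different subnetwork; the per-input
# standard deviation across passes estimates predictive uncertainty.
model.train()
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])
mean, std = samples.mean(0), samples.std(0)   # prediction and uncertainty
print(mean[:3].squeeze(), std[:3].squeeze())
```

The per-prediction uncertainty can then be compared across contextual scenarios (weather, flood events, water parameters) to characterize where the soft sensor is trustworthy.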
200

Enhancing data-driven marketing through sales-marketing knowledge exchange and collaboration: a dynamic capability perspective : A case study of a high-tech process automation company

Haga, Viktor January 2024 (has links)
Purpose - This study explores how knowledge exchange between sales and marketing can enhance data-driven marketing initiatives for firms in the high-tech process industry. Additionally, the study aims to identify the factors that drive alignment and collaboration between the sales and marketing interfaces from a dynamic capability perspective. Method - This master's thesis is an exploratory study with an inductive approach. Ten qualitative interviews were conducted with employees from a high-tech process automation company, specifically those working in marketing and sales roles. The interviews followed a semi-structured approach, and a thematic analysis was performed to examine the empirical findings. Findings - The study emphasizes the significance of collaboration, knowledge exchange, and functional alignment between sales and marketing in the context of data-driven marketing and sales lead generation. By applying a dynamic capability framework, the study sheds light on how firms can leverage knowledge exchange and functional alignment to capitalize on market opportunities and gain a competitive advantage. Theoretical and practical contributions - The study delves into the underexplored realm of data-driven marketing within the high-tech process industry, focusing on the intricate dynamics between the sales and marketing functions during the lead generation process. Through its analysis, the research not only enriches theoretical understanding but also offers practical insights for managers in the high-tech process industry, providing recommendations to enhance collaboration and knowledge exchange and to optimize data-driven marketing initiatives.
