201

Computational Design of 2D-Mechanical Metamaterials

McMillan, Kiara Lia 22 June 2022
Mechanical metamaterials are novel materials whose unusual properties arise from their underlying microstructural topology rather than from the constituent material they are made of. The effective properties they display at the macroscale depend on the design of their microstructural topology. In this work, two classes of mechanical metamaterials are studied in 2D. The first class is made of trusses, referred to as truss-based mechanical metamaterials. These materials are studied through their application to a beam component, where finite element analysis is performed to determine how truss-based microstructures affect the displacement behavior of the beam. This analysis is supplemented by the development of a graphical user interface in which users can design a beam made of truss-based microstructures and see how their design affects the beam's behavior. The second class of mechanical metamaterial investigated is made of self-assembled structures called spinodoids. Their smooth topology makes them less prone to the high stress concentrations present in truss-based mechanical metamaterials. A large database of spinodoids is generated in this study. Through data-driven modeling, the geometry of the spinodoids is coupled with their Young's modulus to approach inverse design under uncertainty. Before mechanical metamaterials can be applied in industry, they need to be better understood and thoroughly characterized, and more tools are needed that ease the design of these metamaterials. This work aims to improve the understanding of mechanical metamaterials and to develop efficient computational design strategies tailored specifically to them. / Master of Science / Mechanical metamaterials are hierarchical materials consisting of periodically or aperiodically repeating unit-cell arrangements at the microscale. The design of the unit cells allows these materials to display unique properties not usually found in traditionally manufactured materials, enabling their use in a multitude of potential engineering applications. The presented study explores two classes of mechanical metamaterials in 2D: truss-based architectures and spinodoids. Truss-based mechanical metamaterials are made of trusses arranged in a lattice-like framework, whereas spinodoids are unit cells containing smooth structures that mimic the two coexisting phases of a phase-separation process called spinodal decomposition. In this research, computational design strategies are applied to efficiently model and further understand these sub-classes of mechanical metamaterials.
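For orientation, spinodoid-like geometries are commonly generated as level sets of Gaussian random fields assembled from random plane waves, mimicking the outcome of spinodal decomposition. The sketch below is a minimal 2D illustration of that standard recipe, not the thesis's code; the wave count, the fixed wavenumber, and the isotropic direction sampling are assumptions (actual spinodoid designs typically restrict wave directions to control anisotropy).

```python
import numpy as np
from scipy.special import erfinv

def spinodoid_2d(n_waves=100, grid=256, density=0.5, n_periods=8, seed=0):
    """Sample a 2D spinodoid-like microstructure: a Gaussian random field
    built from random plane waves, thresholded to a target solid fraction."""
    rng = np.random.default_rng(seed)
    angles = rng.uniform(0.0, 2.0 * np.pi, n_waves)   # isotropic wave directions (assumption)
    phases = rng.uniform(0.0, 2.0 * np.pi, n_waves)
    k = 2.0 * np.pi * n_periods                       # fixed wavenumber sets the feature size
    x = np.linspace(0.0, 1.0, grid)
    X, Y = np.meshgrid(x, x)
    phi = np.zeros_like(X)
    for a, g in zip(angles, phases):
        phi += np.cos(k * (X * np.cos(a) + Y * np.sin(a)) + g)
    phi *= np.sqrt(2.0 / n_waves)                     # normalize to unit variance
    level = np.sqrt(2.0) * erfinv(1.0 - 2.0 * density)  # level set giving solid fraction `density`
    return (phi > level).astype(np.uint8)             # 1 = solid phase, 0 = void
```

A surrogate model trained on many such samples, labeled with homogenized Young's moduli from finite element analysis, is the usual bridge to inverse design of this kind.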
202

Trustworthy Soft Sensing in Water Supply Systems using Deep Learning

Sreng, Chhayly 22 May 2024
In many industrial and scientific applications, accurate sensor measurements are crucial. Instruments such as nitrate sensors are vulnerable to environmental conditions, calibration drift, high maintenance costs, and degradation. To overcome these limitations, researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning. Deep learning techniques have shown promise in outperforming traditional methods in many applications by achieving higher accuracy, but they are often criticized as 'black-box' models due to their lack of transparency. This thesis presents a framework for deep learning-based soft sensors that quantifies their robustness by estimating predictive uncertainty and evaluating performance across various scenarios. The framework also facilitates comparisons between hard and soft sensors. To validate the framework, I conduct experiments using data generated by AI and Cyber for Water and Ag (ACWA), a cyber-physical water-environment testbed. Afterwards, the framework is tested on real-world environment data from Alexandria Renew Enterprise (AlexRenew), establishing its applicability and effectiveness in practical settings. / Master of Science / Sensors are essential in various industrial systems and offer numerous advantages. Essential to measurement science and technology, they enable reliable, high-resolution, low-cost measurement and impact areas such as environmental monitoring, medical applications and security. The importance of sensors extends to the Internet of Things (IoT) and large-scale data analytics. In these fields, sensors are vital to generating the data used in industries such as health care, transportation and surveillance, and Big Data analytics processes this data for purposes including health management and disease prediction, demonstrating the growing importance of sensors in data-driven decision making. In many industrial and scientific applications, precision and trustworthiness in measurements are crucial for informed decision-making and maintaining high-quality processes. Instruments such as nitrate sensors are particularly susceptible to environmental conditions, calibration drift, high maintenance costs, and a tendency to become less reliable over time due to aging. The lifespan of these instruments can be as short as two weeks, posing significant challenges. To overcome these limitations, researchers have turned to advanced computational methods, including mathematical modeling, statistical analysis, and machine learning. Traditional methods have had some success, but they often struggle to fully capture the complex dynamics of natural environments. This has led to increased interest in more sophisticated approaches, such as deep learning. Deep learning-based soft sensors have shown promise in outperforming traditional methods in many applications by achieving higher accuracy. However, they are often criticized as "black-box" models due to their lack of transparency, which raises questions about their reliability and trustworthiness and makes it critical to assess these aspects. This thesis presents a comprehensive framework for deep learning-based soft sensors. The framework quantifies the robustness of soft sensors by estimating predictive uncertainty and evaluating performance across a range of contextual scenarios, such as weather conditions, flood events, and water parameters.
These evaluations help define the trustworthiness of the soft sensor and facilitate comparisons between hard and soft sensors. To validate the framework, we conduct experiments using data generated by ACWA, a cyber-physical water-environment testbed we developed, which provides a controlled environment in which to test and refine the framework. Subsequently, we test the framework on real-world environment data from AlexRenew, further establishing its applicability and effectiveness in practical settings and providing a robust and reliable tool for sensor data analysis and prediction. Ultimately, this work aims to contribute to the broader field of sensor technology, enhancing our ability to make informed decisions based on reliable and accurate sensor data.
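For context, one widely used way to obtain the predictive uncertainty such a framework relies on is Monte Carlo dropout. The sketch below is a minimal illustration of that general idea, not the thesis's implementation; the network architecture, input features, and sample count are assumptions.

```python
import torch
import torch.nn as nn

class SoftSensor(nn.Module):
    """Small regression net mapping easy-to-measure water parameters
    (e.g. temperature, pH, conductivity) to a hard-to-measure target
    such as nitrate concentration. Layer sizes are assumptions."""
    def __init__(self, n_in: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 100):
    """Monte Carlo dropout: keep dropout active at inference and treat the
    spread of repeated stochastic forward passes as predictive uncertainty."""
    model.train()  # keeps dropout on; freeze any batch-norm layers separately in real use
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # point estimate, uncertainty
```

At inference, a large predictive standard deviation flags inputs (e.g. unusual weather or flood conditions) where the soft sensor's reading should not be trusted.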
203

Enhancing data-driven marketing through sales-marketing knowledge exchange and collaboration: a dynamic capability perspective : A case study of a high-tech process automation company

Haga, Viktor January 2024
Purpose - This study explores how knowledge exchange between sales and marketing can enhance data-driven marketing initiatives for firms in the high-tech process industry. Additionally, the study aims to identify the factors that drive alignment and collaboration between the sales and marketing interfaces from a dynamic capability perspective. Method - This master's thesis is an exploratory study with an inductive approach. Ten qualitative interviews were conducted with employees of a high-tech process automation company, specifically those working in marketing and sales roles. The interviews followed a semi-structured approach, and a thematic analysis was performed to examine the empirical findings. Findings - The study emphasizes the significance of collaboration, knowledge exchange, and functional alignment between sales and marketing in the context of data-driven marketing and sales lead generation. By applying a dynamic capability framework, the study sheds light on how firms can leverage knowledge exchange and functional alignment to capitalize on market opportunities and gain a competitive advantage. Theoretical and practical contributions - The study delves into the underexplored realm of data-driven marketing within the high-tech process industry, focusing on the intricate dynamics between the sales and marketing functions during the lead generation process. The research not only enriches theoretical understanding but also offers practical insights for managers in the high-tech process industry, with recommendations to enhance collaboration and knowledge exchange and to optimize data-driven marketing initiatives.
204

Tillit för och användning av Artificiell Intelligens som verktyg : En kvalitativ studie om tillits inverkan på användning av artificiell intelligens / Trust in and use of Artificial Intelligence as a tool : A qualitative study of the impact of trust on the use of artificial intelligence

Sandell, Ludvig, Ljung, Edvin January 2024
Artificial intelligence is a technology that many individuals view favourably in today's world, as it contributes a variety of benefits that can help individuals perform specific tasks. Previous research has shown that individuals' willingness to use artificial intelligence is affected by the trust they place in the technology. It discusses how working methods and processes are affected by artificial intelligence and how promoting the individual's trust in the technology is of utmost importance for promoting its use. Previous research has also identified and demonstrated various methods and models for measuring and building appropriate trust in AI, as well as variables that affect the individual's trust and thus willingness to use artificial intelligence as a technology. This study examines trust in and use of artificial intelligence from a user's perspective, with the individual in focus: the impact of trust on use of the technology, the variables that affect users' trust in it, and what matters in the implementation and use of artificial intelligence to promote trust and use. The aim is to contribute nuanced knowledge of what is required to promote effective collaboration between humans and artificial intelligence in performing various tasks. The study takes a qualitative, inductive approach in which empirical data was collected through open individual interviews and observations in which respondents interacted with an AI tool and performed simple use cases. Through content analysis, the empirical material has produced a nuanced, knowledge-expanding view of the phenomenon and identified aspects and variables that influence individuals' trust in and use of artificial intelligence. According to respondents, these areas have a major impact on decisions to use or not use artificial intelligence tools. The results of the study show that there are aspects of and variables around trust that have a major impact on decisions to use AI tools. These are thus important to keep in mind when implementing AI tools in order to promote trust and, in turn, the use of artificial intelligence.
205

Pilotage de la performance des projets de science citoyenne dans un contexte de transformation du rapport aux données scientifiques : systématisation et perte de production / Managing performance of citizen science projects in a context of scientific data transformation : systematization and production loss

Sitruk, Yohann 03 July 2019
A growing number of contemporary scientific organizations collaborate with crowds of participants on diverse tasks of the scientific process, often in the form of citizen science projects. These crowds are an opportunity in a context of massive data deluge that confronts scientific structures with the limits of their resources and capabilities. However, these new forms of cooperation are destabilized by their very nature once the tasks delegated to the crowd require a certain inventiveness (solving problems, formulating scientific hypotheses) and the projects have to be repeated within the organization. Based on two experimental studies built on an original model, this thesis studies the management mechanisms needed to ensure the performance of projects delegated to the crowd. We show that performance is linked to the management of two types of capitalization: cross-capitalization (each participant can reuse the work of the other participants) and sequential capitalization (capitalization by the participants, then by the organizers). In addition, this research highlights a new managerial figure needed to support this capitalization, the "manager of inventive crowds", essential to the success of such projects.
206

[en] DATA-DRIVEN ROBUST OPTIMIZATION MODEL APPLIED FOR FIXED INCOME ALLOCATION / [pt] MODELO DE OTIMIZAÇÃO ROBUSTA ORIENTADO POR DADOS APLICADO NA ALOCAÇÃO DE RENDA FIXA

14 July 2020
[en] This work proposes a data-driven worst-case robust optimization model applied to the selection of a portfolio of fixed-income securities. Portfolio management involves financial decision-making and risk management through the optimal selection of assets based on expected returns. Because these returns are uncertain random variables, a defined set of estimated uncertainties, called scenarios, was included directly in the optimization process. The Nelson-Siegel curve-fitting model was used to construct the term structures of interest rates employed in pricing the securities: a risk-free asset and several risky assets of different maturities. The fixed-rate securities are marked to market because they are traded before their maturity dates. The implementation was carried out through computational simulation using market data and estimated data that fed the model. With the robust optimization model, different tests were performed, such as analyzing the sensitivity of the model to parameter variations and using a rolling-horizon scheme to simulate behavior over time. Once the optimal portfolio compositions were obtained, backtesting was performed to evaluate the behavior of the allocations against realized returns, together with a comparison against the performance of a benchmark. The test results showed the adequacy of the yield-curve model and good allocation results for the robust portfolio, which remained reliable even in periods of crisis.
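For context, the two building blocks named in the abstract have standard textbook forms: the Nelson-Siegel zero-rate curve and a worst-case (max-min) allocation over return scenarios, which can be solved as a linear program. The sketch below is a minimal illustration under those standard formulations, not the thesis's exact model; parameter names and the LP encoding are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero rate for maturity tau (years):
    y(tau) = b0 + b1*(1 - e^-x)/x + b2*((1 - e^-x)/x - e^-x), with x = tau/lam."""
    x = np.asarray(tau, dtype=float) / lam
    slope = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * slope + beta2 * (slope - np.exp(-x))

def worst_case_weights(R):
    """Max-min allocation over scenario returns R (n_scenarios x n_assets):
    maximize t subject to R @ w >= t, sum(w) = 1, w >= 0."""
    n_scen, n_assets = R.shape
    c = np.zeros(n_assets + 1)
    c[-1] = -1.0                                  # linprog minimizes, so minimize -t
    A_ub = np.hstack([-R, np.ones((n_scen, 1))])  # encodes t - R @ w <= 0
    b_ub = np.zeros(n_scen)
    A_eq = np.hstack([np.ones((1, n_assets)), np.zeros((1, 1))])
    b_eq = np.array([1.0])                        # weights sum to one
    bounds = [(0, None)] * n_assets + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n_assets], res.x[-1]            # weights, worst-case return
```

Discount factors for pricing would then follow from the fitted curve, e.g. np.exp(-nelson_siegel(t, ...) * t) for maturity t.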
207

A study of transfer learning on data-driven motion synthesis frameworks / En studie av kunskapsöverföring på datadriven rörelse syntetiseringsramverk

Chen, Nuo January 2022
Various studies have shown the potential and robustness of deep learning-based approaches for synthesising novel motions of 3D characters in virtual environments such as video games and films. The models are trained on motion data bound to the respective character skeleton (rig). This limits the scalability and applicability of the models, since they can only learn motions from one particular rig (domain) and produce motions in that domain alone. Transfer learning techniques can be used to overcome this issue and allow the models to better adapt to other domains with limited data. This work presents a study of three transfer learning techniques for the proposed Objective-driven motion generation model (OMG), a model for procedurally generating animations conditioned on positional and rotational objectives. Three transfer learning approaches for achieving rig-agnostic encoding (RAE) are proposed and tested: Feature encoding (FE), Feature clustering (FC) and Feature selection (FS), with the aim of improving the model's learning on new domains with limited data. All three approaches demonstrate significant improvements in both the performance and the visual quality of the generated animations compared with the vanilla performance. The empirical results indicate that the FE and FC approaches yield better transfer quality than the FS approach. It is inconclusive which of the two performs better, but the FE approach is more computationally efficient, which makes it the more favourable choice for real-time applications.
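As a hedged sketch of what transfer to a new rig with frozen pretrained components can look like in general (not the thesis's OMG/RAE code; `encoder`, `generator`, `adapter`, and the data loader are hypothetical stand-ins introduced here for illustration):

```python
import torch
import torch.nn as nn

def adapt_to_new_rig(encoder: nn.Module, generator: nn.Module,
                     adapter: nn.Module, loader, n_epochs: int = 10):
    """Feature-encoding-style transfer: freeze the pretrained encoder and
    motion generator, and train only a small per-rig adapter on the
    limited data available for the new skeleton (domain)."""
    for module in (encoder, generator):
        for p in module.parameters():
            p.requires_grad = False
    opt = torch.optim.Adam(adapter.parameters(), lr=1e-4)
    loss_fn = nn.MSELoss()
    for _ in range(n_epochs):
        for rig_feats, objectives, target_motion in loader:
            # the adapter maps rig-specific features into the latent
            # space the pretrained encoder was trained on
            latent = encoder(adapter(rig_feats))
            pred = generator(latent, objectives)  # hypothetical two-argument forward
            loss = loss_fn(pred, target_motion)
            opt.zero_grad()
            loss.backward()
            opt.step()
```

Freezing the pretrained networks and fitting only a small adapter is one common way to adapt with limited target-domain data; the FE, FC and FS approaches studied in the thesis differ in how the rig-agnostic features are constructed.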
208

Improving data-driven decision making through data democracy : Case study of a Swedish bank

Amerian, Irsa January 2021
Nowadays, becoming data-driven is the vision of almost all organizations. However, achieving this vision is not as easy as it may look, and many factors affect, enable, support and sustain a data-driven ecosystem in an organization. Among these factors, this study focuses on data democracy, which can be defined as intra-organizational open data that aims to empower employees with faster and easier access to data so they can obtain the business insight they need without relying on external help. In the existing literature, while the importance of becoming data-driven has been widely discussed, there is a noticeable gap when it comes to data democracy within organizations. This master's thesis therefore aims to clarify the importance and role of data democracy in becoming a data-driven organization, focusing on the case of a Swedish bank. Additionally, it investigates the role of data analytics tools in achieving data democracy. The results of the study show a strong connection between empowering different actors in the organization with the needed data knowledge and speeding up the data-driven transformation journey. Based on the study, shared data and the availability of data to a larger number of stakeholders inside an organization result in a better understanding of different aspects of problems, simplify data-driven decision making and make the organization more data-driven. In the process of becoming data-driven, organizations should provide analytics tools not only to data specialists but also to non-data technical people. And by offering the needed support, training and collaboration possibilities between the two groups of employees (data specialists and non-data specialists), they should attempt to enable the second group to extract insight from data independently of the help of data scientists. An organization can succeed on the path to becoming data-driven when it invests in the reusable capabilities of its employees, by discovering data science skills across various departments and turning its domain experts into citizen data scientists.
209

“Jag tror att man som företag säger sig vara ganska datadriven i sina beslut” : En kvalitativ studie om Business intelligence och datadrivenhet i ett svenskt konsultföretag / “I believe that as a company you say you are quite data-driven in your decisions” : A qualitative study of Business Intelligence and data-drivenness at a Swedish consulting company

Johansson, Antonia, Lindgren, Filip January 2020
In an increasingly digital world, companies need to be at the forefront of development in order to gain market share and remain competitive. This qualitative case study therefore investigates how Business intelligence should be implemented to increase technology acceptance among employees, and how data-driven a consulting company in Sweden is. An important factor when Business intelligence is to be implemented and applied is the employees and the company culture. It is important to normalize the collection of data in order to create a data culture in which high-quality data is collected. Moreover, Business intelligence depends strongly on the collected data maintaining high quality in order to create relevant reports and thereby support various decision-making. When a new platform is implemented, many employees are affected, which means the platform can generate both positive and negative reactions.
210

Data at your service! : A case study of utilizing in-service data to support the B2B sales process at a large information and communications technology company

Wendin, Ingrid, Bark, Per January 2021
The digitalization of our society and the creation of data-intensive industries are transforming how industrial sales can be made. Large volumes of data are generated when businesses and people use the digital products and services available in the modern world. Some of this data describes the digital products and services while they are in use, i.e., it is in-service data. Furthermore, data has during the last decade come to be seen as an asset that can improve decision-making, and sales activities have become increasingly customer specific. The purpose of this study was to explore how knowledge from in-service data can serve B2B selling. To realize this purpose, the following three research questions were answered by conducting a single case study of a large company in the information and communications technology (ICT) industry. (RQ1) How does a company in a data-intensive industry use knowledge from in-service data in the B2B sales process? (RQ2) What opportunities does knowledge from in-service data create in the B2B sales process? (RQ3) What challenges hinder a company from using knowledge from in-service data in the B2B sales process? RQ1: This study concluded that, in the context of a data-intensive industry, knowledge from in-service data is actively used by the sales team throughout the steps of the B2B sales process, though to varying degrees. In-service data is used in six categories of sales activities: (1) to understand the customer in terms of their technical and strategic needs, which enables lead generation and cross-selling, (2) to make information from in-service data available through data collection, storage, and analyses, (3) to nurture the relationship between buyer and seller by creating understanding, trust and satisfactory offers for the customer, (4) to present solutions with convincing arguments, (5) to solve problems and satisfy the customer's needs, and (6) to provide post-sale value-adding services. Moreover, three general resources used in these activities were identified: an audit report that presents the information in the data, a plan that presents strategic expansions of the solution, and simulations of the solution. Furthermore, four general actors who perform the activities were identified: the Key Account Manager (KAM), who is responsible for conducting the sales interactions with the customer; the sales team and the presales team, who both support the KAM; and the customer. In addition to the general resources and actors, companies may use step-specific resources and actors. RQ2: Four categories of opportunities were identified: knowledge from in-service data (1) assists KAMs in discovering customer needs, (2) guides the KAM in creating better customer-specific solutions, (3) helps the KAM move the sale faster through the sales process, and (4) assists the company in becoming a true partner that provides strategic services rather than acting as a supplier. RQ3: Finally, four categories of challenges were identified: (1) organizational, (2) technological, (3) cultural, and (4) legal & security. Of these, obtaining access to the data was identified as the greatest challenge to using in-service data. The opportunities and the challenge of accessing data are deemed general for companies in data-intensive industries, while the other challenges depend on the structure, size, and culture of the individual company.
The findings of this study contribute to a general understanding of how companies in data-intensive industries may use knowledge from in-service data, what opportunities this data creates for their B2B sales process, and which challenges they face when they pursue activities that use this knowledge. To conclude, in-service data serves B2B selling especially as a source of customer knowledge. It is used by salespeople to understand the customer in terms of its technical and strategic needs, and salespeople use this knowledge to conduct various customer-oriented sales activities. In-service data creates several opportunities in B2B sales, but several challenges must be overcome to seize them, especially the question of data access.
