391 |
Privacy preserving software engineering for data driven development. Tongay, Karan Naresh. 14 December 2020
The exponential rise in data generation has introduced many new areas of research, including data science, data engineering, machine learning, and artificial intelligence. It has become important for any industry or organization to understand and analyze its data precisely in order to extract value from it. That value can only be realized when the data is put into practice in the real world, and the most common approach for doing so in the technology industry is software engineering. This brings into the picture privacy-oriented software engineering, and with it the rise of data protection regulations such as the GDPR (General Data Protection Regulation) and the PDPA (Personal Data Protection Act). Organizations, governments, and companies that have accumulated large amounts of data over time may conveniently use that data to increase business value, but the privacy of sensitive data, especially people's personal information, can easily be circumvented when designing a software engineering model for these types of applications. Even before the software engineering phase of a data processing application, there are often one or more data sharing agreements or privacy policies in place. Every organization may have its own way of maintaining data privacy practices for data-driven development. There is a need to generalize or categorize these approaches into tactics that other practitioners can refer to when integrating data privacy practices into their development. This qualitative study provides an understanding of the approaches and tactics practised within the industry for privacy-preserving data science in software engineering, and discusses a tool for data usage monitoring to identify unethical data access.
Finally, we studied strategies for secure data publishing and conducted experiments on sample data to demonstrate how these techniques can help secure private data before publication. / Graduate
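The abstract does not name the specific publishing techniques used in the experiments. As an illustration of the kind of safeguards it alludes to, the sketch below combines a k-anonymity-style generalization of quasi-identifiers with a differentially private count via the Laplace mechanism; the example records, field names, and parameters are invented for illustration, not taken from the thesis.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count: the true count plus Laplace(1/epsilon) noise."""
    true_count = sum(1 for r in records if predicate(r))
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise

def generalize_age(age, width=10):
    """Coarsen an exact age into a bucket such as '30-39' (generalization in the
    spirit of k-anonymity)."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

# Invented example records with a sensitive attribute.
people = [{"age": 34, "zip": "12345", "sensitive": True},
          {"age": 37, "zip": "12377", "sensitive": False},
          {"age": 52, "zip": "12401", "sensitive": False}]

# Publish generalized quasi-identifiers rather than raw values.
published = [{"age": generalize_age(p["age"]), "zip": p["zip"][:3] + "**",
              "sensitive": p["sensitive"]} for p in people]

# Release an aggregate under differential privacy instead of the exact count.
rng = random.Random(0)
noisy = dp_count(people, lambda p: p["sensitive"], epsilon=1.0, rng=rng)
```

Either step alone already limits what a downstream consumer can learn about an individual; a real release would calibrate the bucket widths and the privacy budget epsilon to the data sharing agreement in force.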
|
392 |
IT’S IN THE DATA: A multimethod study on how SaaS businesses can utilize cohort analysis to improve marketing decision-making. Fridell, Gustav; Cedighi Chafjiri, Saam. January 2020
Incorporating data and analytics into marketing decision-making is today crucial for a company's success. This holds especially true for SaaS businesses, whose subscription-based pricing model depends on good service retention for long-term viability and profitability. Efficiently incorporating data and analytics has its prerequisites, but for SaaS businesses it can be achieved using the analytical framework of cohort analysis, which utilizes subscription data to obtain actionable insights into customer behavior and retention patterns. Consequently, to expand the understanding of how SaaS businesses can utilize data-driven methodologies to improve their operations, this study examined how SaaS businesses can utilize cohort analysis to improve marketing decision-making and what the prerequisites are for doing so efficiently. Using a multimethodology approach consisting of action research and a single case study of the fast-growing SaaS company GetAccept, the study concludes that the incorporation and utilization of cohort analysis can improve marketing decision-making for SaaS businesses. This conclusion rests on two findings: the incorporation of cohort analysis can streamline the marketing decision-making process; and it can give decision-makers a better foundation of information on which to base marketing decisions, leading to an improved expected outcome of those decisions.
Furthermore, to enable efficient data-driven marketing decision-making and to effectively utilize methods such as cohort analysis, the study concludes that SaaS businesses need to fulfill three prerequisites: management that supports and advocates for data and analytics; a company culture built upon information sharing and evidence-based decision-making; and a customer base large enough to establish similarities within, and differences between, customer segments as significant. The last prerequisite, however, applies specifically to methods like cohort analysis; by utilizing other methods, SaaS businesses may still be able to pursue data-driven marketing decision-making efficiently, as long as the first two prerequisites are fulfilled.
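The cohort analysis framework the study builds on can be sketched in a few lines: bucket customers by signup month and compute the share still active at each month-offset since signup. The event data and integer month encoding below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical subscription activity: (customer, signup_month, active_month),
# with months encoded as integers.
events = [
    ("a", 0, 0), ("a", 0, 1), ("a", 0, 2),
    ("b", 0, 0), ("b", 0, 1),
    ("c", 1, 1), ("c", 1, 2),
    ("d", 1, 1),
]

def cohort_retention(events):
    """Retention rate per cohort and per month since signup."""
    cohort = defaultdict(set)   # signup month -> customers in that cohort
    active = defaultdict(set)   # (signup month, offset) -> customers active then
    for cust, signup, month in events:
        cohort[signup].add(cust)
        active[(signup, month - signup)].add(cust)
    return {key: len(custs) / len(cohort[key[0]]) for key, custs in active.items()}

retention = cohort_retention(events)
# e.g. cohort 0 keeps 50% of its customers two months after signup.
```

Laid out as a table (cohorts as rows, offsets as columns), such rates make retention patterns across signup generations directly comparable, which is the actionable insight the study describes.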
|
393 |
A hybrid prognostic methodology and its application to well-controlled engineering systems. Eker, Ömer F. January 2015
This thesis presents a novel hybrid prognostic methodology that integrates physics-based and data-driven prognostic models to enhance prognostic accuracy, robustness, and applicability. The methodology combines the short-term predictions of a physics-based model with the longer-term projection of a similarity-based data-driven model to obtain remaining useful life estimates. It has been applied to specific components of two different engineering systems, one representing an accelerated and the other a nominal degradation process: clogged filters and fatigue crack propagation were selected as case studies. An experimental rig was developed to investigate the accelerated clogging phenomenon, whereas the publicly available Virkler fatigue crack propagation dataset was chosen after an extensive literature search and dataset analysis. The filter clogging rig is designed to produce reproducible clogging data under different operational profiles, and this data is expected to serve as a good benchmark dataset for prognostic models. The performance of the methodology was evaluated by comparing remaining useful life estimates obtained from the hybrid and the individual prognostic models, based on the most recent prognostic evaluation metrics. The results show that the presented methodology improves accuracy, robustness, and applicability. The work contained herein is therefore expected to contribute to scientific knowledge as well as industrial technology development.
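A minimal sketch of the hybrid idea, fusing a short-term physics-style extrapolation with a similarity-based projection against a library of past degradation runs, might look as follows. The degradation data, the fixed exponential growth law, and the simple weighted fusion are illustrative assumptions, not the thesis's actual models.

```python
def similarity_rul(observed, library, threshold):
    """Data-driven RUL: match the observed degradation prefix against library
    runs and read off when the best match reaches the failure threshold."""
    best, best_dist = None, float("inf")
    n = len(observed)
    for run in library:
        if len(run) < n:
            continue
        dist = sum((a - b) ** 2 for a, b in zip(observed, run))
        if dist < best_dist:
            best, best_dist = run, dist
    for t in range(n, len(best)):
        if best[t] >= threshold:
            return t - n + 1          # cycles left after the last observation
    return len(best) - n + 1

def physics_rul(observed, growth_rate, threshold):
    """Physics-style RUL: extrapolate the last level with a fixed growth law."""
    level, cycles = observed[-1], 0
    while level < threshold and cycles < 10_000:
        level *= growth_rate
        cycles += 1
    return cycles

def fused_rul(observed, library, growth_rate, threshold, w=0.5):
    # Simple weighted fusion; the thesis's actual integration scheme is richer.
    return (w * physics_rul(observed, growth_rate, threshold)
            + (1 - w) * similarity_rul(observed, library, threshold))
```

In this toy setting the physics model dominates near the current cycle while the library match carries the long-horizon shape, which is the division of labour the methodology exploits.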
|
394 |
The infrastructure of citizen science: the case of eBird. Paniagua, Alejandra. 04 1900
This research explores how the infrastructure and uses of eBird, one of the world's largest citizen science projects, develop and evolve across time and space. It concentrates on eBird's work with two of its Latin American partners, Mexico and Peru, each with a web portal managed by local organizations. eBird, now a large global network of partnerships, allows users throughout the world to contribute their bird observations online and thereby advance the cause of science and conservation. These observations are stored and managed in a unified, global database that is freely accessible to anyone interested in birds and their conservation. Participants can use the platform's various functionalities to organize and visualize their own data as well as that of others.
The research follows a qualitative methodology based on observation of the eBird platforms and on semi-structured interviews with members of the Cornell Lab of Ornithology, the eBird team, and members of the local partner organizations responsible for eBird Peru and eBird Mexico. We analyze eBird as an infrastructure whose technical and social sides are interrelated and need to be examined together, as a whole. We also explore the variety of uses of the platform and its data by its diverse users. Three major themes emerge: the importance of collaboration as a philosophy underlying the development of eBird; the extension of eBird's relationships and connections through its network of partnerships; and a corresponding increase in both participation and volume of data. Finally, over time we observe an evolution in the data, in its different uses, and in what eBird represents as an infrastructure.
|
395 |
Efficient Bayesian methods for mixture models with genetic applications. Zuanetti, Daiane Aparecida. 14 December 2016
We propose Bayesian methods for selecting and estimating different types of mixture models that are widely used in Genetics and Molecular Biology. Specifically, we propose data-driven selection and estimation methods for a generalized mixture model, which accommodates the usual (independent) and the first-order (dependent) models in one framework, and for QTL (quantitative trait locus) mapping models for independent and pedigree data. For clustering genes through a mixture model, we propose three nonparametric Bayesian methods: a marginal nested Dirichlet process (NDP), which is able to cluster distributions, and a predictive recursion clustering scheme (PRC) and a subset nonparametric Bayesian (SNOB) clustering algorithm for clustering big data. We analyze and compare the performance of the proposed methods and of traditional procedures of selection, estimation, and clustering on simulated and real datasets. The proposed methods are more flexible, improve the convergence of the algorithms, and provide more accurate estimates in many situations. In addition, we propose methods for estimating non-observable QTL genotypes and missing parents, and we improve the Mendelian probability of inheritance of non-founder genotypes using conditional independence structures. We also suggest applying diagnostic measures to check the goodness of fit of QTL mapping models.
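For readers unfamiliar with the baseline these methods extend, here is a minimal EM algorithm for the "usual" independent mixture model, a two-component one-dimensional Gaussian mixture, on synthetic data. The thesis's generalized and nonparametric models go well beyond this sketch.

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    """EM for a two-component 1-D Gaussian mixture (the usual independent model)."""
    # Deterministic initialisation: one mean at each extreme of the data.
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.full(2, x.std())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation.
        dens = pi / (sigma * np.sqrt(2 * np.pi)) * np.exp(
            -0.5 * ((x[:, None] - mu) / sigma) ** 2)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return pi, mu, sigma

# Synthetic "expression levels" from two well-separated groups.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])
pi, mu, sigma = em_gmm_1d(x)
```

The nonparametric alternatives in the thesis avoid fixing the number of components in advance, which is exactly what this fixed two-component EM cannot do.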
|
396 |
Variable selection in model-based clustering for high-dimensional data. Meynet, Caroline. 09 November 2012
This thesis deals with variable selection for clustering. The problem has become all the more challenging with the recent increase in high-dimensional data, where the number of variables can largely exceed the number of observations (DNA analysis, functional data clustering, and so on). We propose a variable selection procedure for clustering suited to high-dimensional contexts. We consider clustering based on finite Gaussian mixture models in order to recast both the variable selection and the choice of the number of clusters into a single global model selection problem. We use the variable selection property of l1-regularization to build, from the data, a model collection that remains of reasonable size even in high dimension. Our procedure differs from classical procedures using l1-regularization as regards the estimation of the mixture parameters: in each model of the collection, rather than considering the Lasso estimator, we calculate the maximum likelihood estimator. We then select one of these maximum likelihood estimators by a non-asymptotic penalized criterion based on the slope heuristic introduced by Birgé and Massart. From a theoretical viewpoint, we establish a model selection theorem for maximum likelihood estimators in a density estimation framework with a random model collection, and apply it in our context to determine a convenient penalty shape for our criterion.
From a practical viewpoint, we carry out simulations to validate our procedure, in particular in the functional data clustering framework. The key idea of the procedure is to use l1-regularization only to build a restricted collection of models, not to estimate the model parameters; that estimation step is carried out by maximum likelihood. This hybrid procedure is inspired by a theoretical study in the first part of the thesis, in which we establish l1-oracle inequalities for the Lasso in the Gaussian regression and mixture-of-Gaussian-regressions frameworks. Unlike the usual l0-oracle inequalities in the literature, these hold with no assumptions at all, suggesting a gap between l1-regularization and l0-regularization.
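The core of the hybrid recipe, l1-regularization only to build a collection of variable supports, then a maximum likelihood refit on each support and a penalized selection, can be sketched in a regression setting as follows. The ISTA solver, the synthetic data, and the use of BIC as a stand-in for the slope-heuristic penalty are simplifying assumptions made for this illustration.

```python
import numpy as np

def ista_lasso(X, y, lam, iters=500):
    """Lasso via iterative soft-thresholding (proximal gradient)."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n   # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

def select_by_l1_then_mle(X, y, lambdas):
    """l1-regularization builds a small collection of supports; each support is
    then refitted by (Gaussian) maximum likelihood and the winner picked by BIC."""
    n, _ = X.shape
    best, seen = (np.inf, None, None), set()
    for lam in lambdas:
        support = tuple(np.nonzero(ista_lasso(X, y, lam))[0])
        if not support or support in seen:
            continue
        seen.add(support)
        Xs = X[:, list(support)]
        beta_mle, *_ = np.linalg.lstsq(Xs, y, rcond=None)
        rss = np.sum((y - Xs @ beta_mle) ** 2)
        bic = n * np.log(rss / n) + len(support) * np.log(n)
        if bic < best[0]:
            best = (bic, support, beta_mle)
    return best

rng = np.random.default_rng(0)
n, p = 100, 10
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] - 2 * X[:, 3] + 0.1 * rng.normal(size=n)
bic, support, beta = select_by_l1_then_mle(X, y, lambdas=[1.0, 0.5, 0.1, 0.05])
```

The point of the refit is visible here: the Lasso coefficients are shrunk towards zero, while the maximum likelihood refit on the selected support is unbiased, matching the procedure's separation of selection from estimation.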
|
397 |
Reliable Information Exchange in IIoT: Investigation into the Role of Data and Data-Driven Modelling. Lavassani, Mehrzad. January 2018
The concept of the Industrial Internet of Things (IIoT) is the tangible building block for the realisation of the fourth industrial revolution. It should improve the productivity, efficiency and reliability of industrial automation systems, leading to revenue growth in industrial scenarios. IIoT needs to encompass various disciplines and technologies to constitute an operable and harmonious system. One essential requirement for a system to exhibit such behaviour is reliable exchange of information. In industrial automation, the information life-cycle starts at the field level, with data collected by sensors, and ends at the enterprise level, where that data is processed into knowledge for business decision making. In IIoT, the process of knowledge discovery is expected to start in the lower layers of the automation hierarchy and to cover the data exchange between connected smart objects performing collaborative tasks. This thesis aims to assist the comprehension of the processes for information exchange in IIoT-enabled industrial automation: in particular, how reliable exchange of information can be performed by communication systems at the field level given an underlying wireless sensor technology, and how data analytics can complement the processes at various levels of the automation hierarchy. Furthermore, this work explores how an IIoT monitoring system can be designed and developed. Communication reliability is addressed by proposing a redundancy-based medium access control protocol for mission-critical applications and analysing its performance with regard to real-time and deterministic delivery. The importance of data and the benefits of data analytics for various levels of the automation hierarchy are examined by suggesting data-driven methods for visualisation, centralised system modelling and distributed data stream modelling.
The design and development of an IIoT monitoring system are addressed by proposing a novel three-layer framework that incorporates wireless sensor, fog and cloud technologies, and an IIoT testbed system is developed to realise the proposed framework. The outcome of this study suggests that redundancy-based mechanisms improve communication reliability but can also introduce drawbacks in the IIoT context, such as poor link utilisation and limited scalability. Data-driven methods result in enhanced readability of visualisation and a reduced need for ground truth in system modelling. The results illustrate that distributed modelling can lower the negative effect of the redundancy-based mechanisms on link utilisation by reducing the up-link traffic. Mathematical analysis reveals that introducing a fog layer in the IIoT framework removes the single point of failure and enhances scalability, while meeting the latency requirements of the monitoring application. Finally, the experimental results show that the IIoT testbed works adequately and can serve the future development and deployment of IIoT applications. / SMART (Smarta system och tjänster för ett effektivt och innovativt samhälle)
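The trade-off the study reports for redundancy-based mechanisms, reliability up, link utilisation down, can be quantified with a back-of-the-envelope model. The independence of losses and the per-transmission success probability below are assumptions for illustration; the thesis's protocol and analysis are more detailed.

```python
def delivery_probability(p_link, k):
    """Chance that at least one of k redundant copies of a frame arrives,
    assuming independent losses with per-transmission success p_link."""
    return 1.0 - (1.0 - p_link) ** k

def useful_link_fraction(k):
    # Sending k copies of every frame leaves only 1/k of the channel for payload.
    return 1.0 / k

# Reliability rises quickly with redundancy, but utilisation falls just as fast.
table = {k: (delivery_probability(0.9, k), useful_link_fraction(k)) for k in (1, 2, 3)}
```

With a 90% link, two copies already reach 99% delivery, yet halve the usable channel, which is the scalability concern that distributed modelling and reduced up-link traffic are meant to soften.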
|
398 |
Subspace predictive control. Fernandez, Erika Maria Francischinelli. 27 November 2009
Model Predictive Control (MPC) technology is widely used in the chemical process industries. Subspace identification (SID), on the other hand, has proven to be an efficient alternative to classical system identification methods. Building on results from MPC and SID, a new control approach called Subspace Predictive Control (SPC), also known as data-driven predictive control, was developed in the late 1990s. In this method, a single operation replaces the three steps of an MPC controller design: system identification, state observer design, and construction of the predictor matrices. The aim of this work is to review studies in the field of SPC, to apply the technology to typical systems of the chemical industry, and to propose new algorithms. Three internal excitation algorithms are developed for the SPC method, which allow the system to be persistently excited while a minimal level of control over the process is still guaranteed. These algorithms enable closed-loop identification, in which the SPC controller model is re-identified using the previously excited data. The SPC controller and the SPC controller with internal excitation are tested through simulation on two different processes. The first is a debutanizer column of a distillation unit, for which two linear models corresponding to different operating points are available. The second is a nonlinear system consisting of a styrene polymerization reactor, for which a phenomenological model is provided. Test results indicate that SPC is more susceptible to measurement noise; however, the SPC controller corrects set-point disturbances of the controlled variables faster than MPC.
Simulations of the SPC with internal excitation show that the proposed algorithms excite the system sufficiently, in the sense that more precise models are obtained from re-identification with the excited data.
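The one-step idea behind SPC, estimating a future-output predictor directly from Hankel matrices of past data instead of first identifying a model and designing an observer, can be sketched for a SISO system as follows. The toy ARX plant, the horizons, and the noiseless data are invented for illustration; real identification data would be noisy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Identification data from a toy SISO ARX plant: y[t] = 0.8*y[t-1] + 0.5*u[t-1].
N = 400
u = rng.normal(size=N)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

p, f = 4, 3                      # past and future horizons
M = N - p - f + 1                # number of data columns

def windows(s, start, rows):
    """Hankel-style matrix: row i holds s[start+i : start+i+M]."""
    return np.array([s[start + i:start + i + M] for i in range(rows)])

Yp, Up = windows(y, 0, p), windows(u, 0, p)      # past outputs / inputs
Yf, Uf = windows(y, p, f), windows(u, p, f)      # future outputs / inputs

# The single SPC step: one least-squares fit replaces model identification,
# observer design and predictor construction.
W = np.vstack([Yp, Up, Uf])
Theta = Yf @ np.linalg.pinv(W)

def predict(theta, y_past, u_past, u_future):
    """Predict the next f outputs from the last p outputs/inputs and planned inputs."""
    return theta @ np.concatenate([y_past, u_past, u_future])

yhat = predict(Theta, y[296:300], u[296:300], u[300:303])
```

A predictive controller would then optimize the planned inputs `u_future` against this predictor; internal excitation, as proposed in the thesis, perturbs those inputs so that the data stays informative enough for re-identification.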
|
399 |
Applications and algorithms for two-stage robust linear optimization. Costa da Silva, Marco Aurelio. 13 November 2018
The research scope of this thesis is two-stage robust linear optimization. We are interested in algorithms that can exploit its structure and in adding alternatives to mitigate the conservatism inherent in a robust solution. We develop algorithms that incorporate these alternatives and are customized to work with medium- or large-scale problem instances. In doing so, we take a holistic approach to conservatism in robust linear optimization and bring together the most recent advances in areas such as data-driven robust optimization, distributionally robust optimization and adaptive robust optimization. We apply these algorithms to defined applications: the network design/loading problem, the scheduling problem, a min-max-min combinatorial problem and the airline fleet assignment problem.
We show how the developed algorithms improve performance compared to previous implementations.
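A toy version of the two-stage structure, a here-and-now decision followed by worst-case recourse over an uncertainty set, can be written by plain enumeration. The capacity/demand setting, the unit cost, and the penalty recourse are illustrative assumptions, not an instance from the thesis.

```python
def recourse_cost(capacity, demand, penalty=10.0):
    """Second-stage (wait-and-see) cost: pay a penalty per unit of unmet demand."""
    return penalty * max(0.0, demand - capacity)

def two_stage_robust(capacities, demand_scenarios, unit_cost=1.0):
    """min over first-stage x of [ c*x + max over scenarios d of recourse(x, d) ]."""
    best_x, best_val = None, float("inf")
    for x in capacities:
        worst = max(recourse_cost(x, d) for d in demand_scenarios)
        value = unit_cost * x + worst
        if value < best_val:
            best_x, best_val = x, value
    return best_x, best_val

# With demands {3, 5, 8}, the robust choice hedges against the worst case (d = 8).
x_star, val = two_stage_robust(capacities=range(0, 11), demand_scenarios=[3, 5, 8])
```

The solution buys capacity for the most extreme scenario even though it is only one of three; shrinking the uncertainty set, for example with a budget of uncertainty or data-driven sets, is precisely the kind of conservatism-mitigating alternative the thesis investigates at scale.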
|
400 |
The impact of business analytical intelligence capability on decision making in the era of big data. Medeiros, Mauricius Munhoz de. 27 February 2018
CAPES - Coordenação de Aperfeiçoamento de Pessoal de Nível Superior / This study investigated the impact of business analytical intelligence capabilities on the expansion of managerial cognitive capabilities, orienting (data-based) decision making in an agile (dynamic) way to improve the management of organizational performance. The phenomenon was explained from the theoretical perspective of dynamic capabilities. To define the constructs, the theoretical elements regarding business analytical intelligence capabilities and decision making were also reviewed. A mixed-methods study was carried out in two stages. The first, exploratory stage, conducted through interviews with 10 managers, allowed the mapping of relationships and the identification of variables, enabling the development of the quantitative instrument. The second, confirmatory stage was performed through a survey with 366 respondents, whose results were analyzed to validate the research instrument and to measure the impact through structural equation modeling, confirming 5 of the 7 hypotheses defined in the conceptual model. The heart of the discussion lies in explaining the impact of business analytical intelligence capabilities on decision making, where the findings show a significant impact of managerial analytical intelligence capabilities, big data governance and processing, and advanced business analytics. The research contributes to theory by explaining business analytical intelligence capabilities as dynamic capabilities, as well as by developing and validating an instrument for the integrated measurement of these capabilities. For the managerial field, the study offers directions and recommendations by indicating potentialities and limitations for the development of these capabilities.
|