About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations. Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

Throughput Enhancement of TCP over Wireless Links

Gupta, Pawan Kumar 01 1900 (has links)
The congestion control mechanisms of the Transmission Control Protocol (TCP) are very effective in providing best-effort service in wired networks, where packet losses are mainly due to congestion. In wireless mobile networks, more often than not, packets are lost because of data corruption on the wireless link. The TCP sender responds to these losses as if they were due to congestion, reducing its congestion window and thereby the rate of flow of packets. Reducing the congestion window is necessary when the network is experiencing congestion, to avoid congestion collapse, but it is not required when packets are lost to corruption on the wireless link. This unnecessary reduction of the congestion window for corruption losses is the main reason for the poor throughput of data transfer in wireless networks, and it can be avoided if TCP can successfully differentiate between packet losses due to congestion and losses due to corruption. We suggest enhancements to TCP that, if implemented, will help the TCP receiver identify corruption losses and congestion losses separately. The enhancements are layered on top of standard TCP NewReno, and we call the new scheme "NewRenoEln" (NewReno with Explicit Loss Notification). We suggest that the TCP sender attach a separate checksum for the TCP header to each packet. Since the TCP header is much shorter than the TCP packet, there is a large probability that the receiver will receive the header portion of the packet without error even if the data portion is corrupted. Once the header information of a corrupted packet is found to be correct, the receiver can generate a reliable Explicit Loss Notification (ELN) for the sender. We derive an expression for the probability of a receiver generating a successful Explicit Loss Notification, assuming a generic link-layer protocol for data transfer over the wireless link. With this analysis, we show that there is a large probability that the receiver will generate a successful ELN under various channel conditions. We also suggest modifications to the sender's behavior on receiving a successful Explicit Loss Notification from the receiver. With these modifications, the TCP sender recovers from corruption losses without any reduction of the congestion window. There is also a need for a unified analytical approach to the evaluation of TCP performance, and we develop such an approach for the NewRenoEln scheme. We compare the throughput results obtained by analytical calculation with those obtained by simulation and find them to be very close. We also compare the performance of the proposed NewRenoEln scheme and standard NewReno TCP, via both simulation and the analytical approach, and find considerable improvement in throughput over wireless links.
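As a rough illustration of why a header-only checksum makes ELN feasible, the sketch below computes the chance that a corrupted packet still arrives with an intact header. It assumes a simplified i.i.d. bit-error model; the thesis's own derivation assumes a generic link-layer protocol, so the sizes and error rate here are illustrative assumptions only.

```python
# Hedged toy model of ELN success probability under i.i.d. bit errors.

def p_eln(ber: float, header_bits: int, payload_bits: int):
    """Return P(header intact AND payload corrupted) and the same
    probability conditioned on the packet being corrupted at all."""
    p_header_ok = (1.0 - ber) ** header_bits
    p_payload_bad = 1.0 - (1.0 - ber) ** payload_bits
    p_any_error = 1.0 - (1.0 - ber) ** (header_bits + payload_bits)
    joint = p_header_ok * p_payload_bad
    return joint, joint / p_any_error

# Assumed figures: 40-byte TCP/IP headers, 1460-byte payload, BER of 1e-5
joint, conditional = p_eln(1e-5, 40 * 8, 1460 * 8)
print(f"P(usable ELN) = {joint:.3f}")                     # ~0.110
print(f"P(usable ELN | packet corrupted) = {conditional:.3f}")  # ~0.972
```

Under this toy model, almost every corrupted packet (about 97%) still carries a readable header, which is what makes reliable loss notification possible.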
2

RGML: A Specification Language that Supports the Characterization of Requirements Generation Processes

Sidky, Ahmed Samy 27 August 2003 (has links)
Despite advancements in requirements generation models, methods and tools, low-quality requirements are still being produced. One potential avenue for addressing this problem is to provide the requirements engineer with an interactive environment that leads (or guides) him or her through a structured set of integrated activities that foster "good" quality requirements. While that is our ultimate goal, a necessary first step in developing such an environment is to create a formal specification mechanism for characterizing the structure, process flow and activities inherent to the requirements generation process. In turn, such specifications can serve as a basis for developing an interactive environment supporting requirements engineering. Reflecting this need, we have developed a markup language, the Requirements Generation Markup Language (RGML), which can be used to characterize a requirements generation process. The RGML can describe process structure, flow of control, and individual activities. Within activities, the RGML supports the characterization of application instantiation, the use of templates and the production of artifacts. The RGML can also describe temporal control within a process, as well as conditional expressions that control if and when various activity scenarios will be executed. The language is expressively powerful yet flexible in its characterization capabilities, and thereby provides the capability to describe a wide spectrum of different requirements generation processes. / Master of Science
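To make the kinds of constructs the RGML characterizes more concrete, here is a hedged sketch of a process description as plain Python data structures — structure, sequential control flow, activities, artifacts, and conditional execution. The names and shapes below are hypothetical stand-ins, not actual RGML syntax.

```python
# Illustrative data model only; not RGML's real element vocabulary.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Activity:
    name: str
    produces: List[str] = field(default_factory=list)   # artifact names
    templates: List[str] = field(default_factory=list)
    # Guard deciding whether this activity scenario executes at all
    condition: Callable[[dict], bool] = lambda ctx: True

@dataclass
class Process:
    name: str
    activities: List[Activity]

    def run(self, ctx: dict) -> List[str]:
        artifacts = []
        for act in self.activities:      # sequential flow of control
            if act.condition(ctx):       # conditional scenario execution
                artifacts.extend(act.produces)
        return artifacts

elicit = Activity("elicit-needs", produces=["raw-needs"])
review = Activity("formal-review", produces=["review-report"],
                  condition=lambda ctx: ctx.get("project_size") == "large")
proc = Process("requirements-generation", [elicit, review])
print(proc.run({"project_size": "large"}))  # ['raw-needs', 'review-report']
```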
3

Analysis and Evaluation of Methods for Activities in the Expanded Requirements Generation Model (x-RGM)

Lobo, Lester Oscar 30 July 2004 (has links)
In recent years, the requirements engineering community has proposed a number of models for the generation of a well-formulated, complete set of requirements. However, these models are often highly abstract or narrowly focused, providing only pieces of structure and partial guidance for the requirements generation process. Furthermore, many of the models fail to identify methods that can be employed to achieve the activity objectives. As a consequence of these problems, the requirements engineer lacks the necessary guidance to apply the requirements generation process effectively, resulting in the production of an inadequate set of requirements. To address these concerns, we propose the expanded Requirements Generation Model (x-RGM), which consists of activities at a more appropriate level of abstraction. This decomposition of the model ensures that the requirements engineer has a clear understanding of the activities involved in the requirements generation process. In addition, the objectives of all the activities defined by the x-RGM are identified and explicitly stated, so that no assumptions are made about the goals of the activities involved in the generation of requirements. We also identify sets of methods that can be used during each activity to achieve its objectives effectively. The mapping of methods to activities guides the requirements engineer in selecting appropriate techniques for a particular activity in the requirements engineering process. Furthermore, we prescribe small subsets of methods for each activity, based on commonly used selection criteria, such that the chosen criterion is optimized. This list of methods is created with the intention of simplifying the task of choosing, for the activities defined by the x-RGM, the methods that best meet the selection-criterion goal. / Master of Science
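As an illustration of the method-to-activity mapping and criterion-driven selection described above, the following sketch picks one method per activity according to a chosen criterion. The activities, methods, and scores are invented placeholders, not the x-RGM's actual catalog.

```python
# Hypothetical catalog: activity -> candidate methods -> criterion scores.
catalog = {
    "elicitation": {"interviews": {"cost": 3, "coverage": 5},
                    "questionnaires": {"cost": 1, "coverage": 3}},
    "analysis":    {"prototyping": {"cost": 4, "coverage": 5},
                    "use-case modeling": {"cost": 2, "coverage": 4}},
}

def pick_methods(catalog, criterion, minimize=False):
    """Select, per activity, the method optimizing the given criterion."""
    choose = min if minimize else max
    return {activity: choose(methods, key=lambda m: methods[m][criterion])
            for activity, methods in catalog.items()}

print(pick_methods(catalog, "cost", minimize=True))
# {'elicitation': 'questionnaires', 'analysis': 'use-case modeling'}
```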
4

SoMMA : a software managed memory architecture for multi-issue processors

Jost, Tiago Trevisan January 2017 (has links)
Embedded processors rely on the efficient use of instruction-level parallelism to meet the performance and energy needs of modern applications. Though improving performance is the primary goal for processors in general, it can have a negative impact on energy consumption, a particularly critical constraint for current systems. In this dissertation, we present SoMMA, a software-managed memory architecture for embedded multi-issue processors that can reduce energy consumption and energy-delay product (EDP) while still increasing memory bandwidth. We combine the use of software-managed memories (SMMs) with the data cache, and leverage the lower energy access cost of SMMs to provide a processor with reduced energy consumption and EDP. SoMMA also provides better overall performance, as memory accesses can be performed in parallel at no cost in extra memory ports. Compiler-automated code transformations minimize the programmer's effort to benefit from the proposed architecture. Our experimental results show that SoMMA is more energy- and performance-efficient not only for the processing cores but also at the full-system level. Comparisons were done using the VEX processor, a reconfigurable VLIW processor. The approach shows average speedups of 1.118x and 1.121x, while consuming up to 11% and 12.8% less energy, when comparing two modified processors with their baselines. SoMMA also shows a reduction of up to 41.5% in full-system EDP while maintaining the same processor area as the baseline processors. Lastly, even with SoMMA halving the data cache size, the number of data cache misses is still reduced in comparison to the baselines.
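To see how the reported speedup and energy figures interact through the EDP metric, here is a back-of-the-envelope sketch using the definition EDP = energy × delay. The composition below is our own simplification, not the thesis's full-system measurement methodology.

```python
# EDP = energy x delay; a speedup shrinks delay, an energy saving shrinks energy.

def edp_ratio(speedup: float, energy_ratio: float) -> float:
    """EDP(new)/EDP(baseline) when delay shrinks by `speedup` and
    energy shrinks to `energy_ratio` of the baseline."""
    return energy_ratio / speedup

# 1.118x speedup with 11% less energy -> roughly 20% lower processor-level EDP
print(1 - edp_ratio(1.118, 0.89))   # ~0.204
# The reported 41.5% reduction is a full-system figure, which also captures
# effects (e.g. fewer cache misses) beyond this simple two-factor model.
```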
5

Effects of bedrock groundwater dynamics on hydro-biogeochemical processes in granitic headwater catchments / 基岩地下水動態が花崗岩山地源流域の水文・生物地球化学過程に与える諸影響

Iwasaki, Kenta 26 March 2018 (has links)
Kyoto University / 0048 / New-system thesis doctorate / Doctor of Agricultural Science / Otsu No. 13180 / Agricultural Thesis Doctorate No. 2859 / 新制||農||1061 (University Library) / 学位論文||H30||N5102 (Faculty of Agriculture Library) / (Chief examiner) Professor Yoshiko Kosugi, Professor Kanehiro Kitayama, Professor Shinya Funakawa / Qualified under Article 4, Paragraph 2 of the Degree Regulations / Doctor of Agricultural Science / Kyoto University / DFAM
6

Construção de um índice de cointegração e utilização do modelo de regimes Markovianos de conversão para a identificação de risco e retorno: evidência a partir de ações na Bolsa de Valores de São Paulo / Construction of a cointegration index and use of a Markov regime-switching model to identify risk and return: evidence from stocks on the São Paulo Stock Exchange

Almeida, Patrícia Marília Ricomini e 09 March 2006 (has links)
One of the most popular subjects in finance is the study of the process that generates security returns, a line of research that originates with the publication of Bachelier's thesis in 1900. In 1978, Jensen stated that any trading strategy that consistently produces economic profits, net of risk and transaction costs, over a sufficiently long period constitutes evidence against market efficiency. Market efficiency can thus be translated into the hypothesis that the expected excess return is, on average, equal to zero under a probability measure that discounts the risk premium, given a set of information (historical, public, or private). Empirical evidence accumulated mainly since the 1960s, however, has documented a series of phenomena that gave rise to a vast literature in finance: volatility clustering, non-normality of returns, negative skewness, excess kurtosis, stochastic volatility, autoregressive behavior of returns and volatility, market anomalies related to seasonality and to the functioning of markets, anomalies related to firm size and capital structure, mean reversion in returns, and extreme values. These findings motivated theories, especially of an economic nature, about the nonlinear character of the data, such as fads, manias and panics, and rational speculative bubbles. This study examines the performance of a dynamic equity indexing strategy based on the construction of a cointegration index, from a market-efficiency perspective, considering different levels of risk and autoregressive regimes. The identification of these regimes in the return generation process of the Brazilian stock market (Bovespa), over the post-Plano Real period (January 1995 to September 2004), is carried out with a Markov regime-switching model. This model makes it possible to identify the nonlinear structure of the data with respect to both the conditional mean and the conditional variance. As a result, the dynamics of the return generation process can be described as a function of cycles of persistent growth ("bull markets") and decline ("bear markets").
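As a minimal sketch of the regime-identification step, the following code fits a two-regime Markov switching model with statsmodels' `MarkovRegression`. Simulated returns stand in for the Bovespa series, and the two-regime choice with switching variance is an illustrative assumption.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.regime_switching.markov_regression import MarkovRegression

rng = np.random.default_rng(0)
# Toy data: a calm "bull" segment followed by a volatile "bear" segment
returns = pd.Series(np.concatenate([rng.normal(0.001, 0.01, 500),
                                    rng.normal(-0.002, 0.03, 250)]))

# Two regimes; both the conditional mean and the variance may switch
model = MarkovRegression(returns, k_regimes=2, switching_variance=True)
fit = model.fit()

# Smoothed probability of being in regime 1 at each date
print(fit.smoothed_marginal_probabilities[1].tail())
```

The smoothed marginal probabilities are what allow each date to be labeled as belonging to a growth or decline regime.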
7

O processo de geração de inovação baseado nas práticas de gestão de conhecimento em empresas de base tecnológica / The innovation generation process based on the knowledge management practices in technology-based companies

Botton, Juliana Santi 04 December 2014 (has links)
This study proposes an Innovation Generation Process (IGP) construct, grounded in Knowledge Management Practices (KMP), as a foundation for technology-based companies (TBCs). Understanding innovation as a product of knowledge, we set out to elaborate a theoretical and empirical construct for generating innovation based on knowledge management (KM), taking the use of KMPs as the most suitable path. The choice of TBCs as the universe of study reflects the perspective that innovation is synonymous with their work. The research was therefore structured around the question: how can KMPs and their application in technology-based companies be organized to support the development of an Innovation Generation Process? The general objective was to develop a construct of the Innovation Generation Process, based on Knowledge Management Practices, from the perspective of technology-based companies in the software and web development segment, and specifically to: (a) elaborate, from a bibliographic study, possible levels and stages within the IGP; (b) list, from a bibliographic study, already-validated KMPs that could be allocated to the IGP; and (c) identify, in the TBCs, the most used and most important KMPs and the stage of the IGP to which each practice belongs. Once the theoretical part of the IGP construct was developed, a questionnaire was applied containing 40 practices and three questions, covering the importance of each practice, its frequency of use, and its classification with respect to nine situational variables of the IGP. Forty-seven questionnaires were returned, and Cronbach's alpha confirmed the reliability of the data. Descriptive statistics indicated brainstorming (idea generation), the development and application of new processes, and the corporate university as the most important practices, and corporate e-mail, the company blog, and informal networks (the grapevine) as the most frequent ones. In addition, Pearson's correlation indicated a relation of 64.3% between importance and frequency, and simple linear regression (respecting the regression assumptions) indicated that 42.3% of the variation in importance is explained by variation in frequency. Finally, to incorporate the practices into the IGP, the mean and standard deviation of each practice were examined through four proposed analyses: (a) the coefficient of variation; (b) visual analysis; (c) amplitude; and (d) factor analysis. The first was refuted because it did not meet the proposed objectives; visual analysis proved applicable as an auxiliary technique; and the amplitude analysis turned out to be the best technique for obtaining the results, yielding a division of the practices into five classes: primary practices, central practices, superior practices, dispersed primary practices, and dispersed superior practices. Factor analysis corroborated the amplitude analysis, dividing the practices into primary, central, and superior. In summary, the practices were allocated in a panel indicating which practice supports each stage of the IGP. The results allow the construct to be recommended to other sectors.
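The reliability and correlation checks reported above can be reproduced in miniature as follows. The toy response matrix is a stand-in for the 47 questionnaires, so the printed values will not match the study's figures.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(0, 1, size=(47, 1))          # shared respondent trait
answers = latent + rng.normal(0, 1, size=(47, 40))  # 47 respondents x 40 practices

# Toy importance/frequency scores for the 40 practices, built to be related
importance = rng.normal(3.5, 0.5, 40)
frequency = 0.6 * importance + rng.normal(0, 0.25, 40)
r = np.corrcoef(importance, frequency)[0, 1]

# r^2 equals the variance share explained by a simple linear regression
print(f"alpha={cronbach_alpha(answers):.2f}, r={r:.2f}, r^2={r*r:.2f}")
```

Note that a Pearson correlation of 64.3% corresponds to an r² of roughly 41%, consistent with the regression result reported above.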
8

Conception et génération dynamique de tableaux de bord d’apprentissage contextuels / Design and dynamic generation of contextual Learning Analytics dashboards

Dabbebi, Ines 11 October 2019 (has links)
This work is part of the broader field of Learning Analytics (LA). It was carried out within the context of the HUBBLE project, a national observatory for the design and sharing of data analysis processes. We are interested in communicating data analysis results to users by providing Learning Analytics dashboards (LADs). Our main issue is the identification of generic LAD structures in order to dynamically generate tailored LADs. These structures must be generic to ensure their reuse, yet adaptable to users' needs. Existing work proposes LADs that remain too general or are developed in an ad hoc way. Within the HUBBLE project, we want to use the decisions identified by end users to dynamically generate LADs. We looked to the business intelligence field because of the central place dashboards occupy in its decision-making processes. Decision-making requires an explicit understanding of user needs, which is why we adopted a user-centered design (UCD) approach to generate adapted LADs. We propose a new process for capturing end users' needs, from which we elaborated a set of models (indicator, visualization means, user, pattern, ...). These models are used by a generation process implemented in a prototype of a dynamic LAD generator. We conducted an iterative evaluation phase whose objective was to refine our models and validate the efficiency of our generation process; the second iteration demonstrated the impact of decisions on LAD generation. We can thus confirm that the decision is a central element in the generation of LADs.
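As a hedged illustration of decision-driven dashboard generation, the sketch below assembles widget descriptions from an indicator model and a user model. All names and structures here are hypothetical, not the HUBBLE prototype's actual models or API.

```python
# Hypothetical indicator model: name -> type and candidate visualizations.
INDICATORS = {
    "completion-rate":    {"type": "ratio",  "viz": ["gauge", "bar"]},
    "activity-over-time": {"type": "series", "viz": ["line", "heatmap"]},
}

def generate_dashboard(decision: str, role: str) -> list[dict]:
    """Pick the indicators relevant to a user's decision and a
    visualization means suited to the user's role."""
    wanted = {"monitor-engagement": ["activity-over-time", "completion-rate"]}
    widgets = []
    for name in wanted.get(decision, []):
        spec = INDICATORS[name]
        # Toy adaptation rule: teachers get the first (simpler) visualization
        viz = spec["viz"][0] if role == "teacher" else spec["viz"][-1]
        widgets.append({"indicator": name, "visualization": viz})
    return widgets

print(generate_dashboard("monitor-engagement", "teacher"))
# [{'indicator': 'activity-over-time', 'visualization': 'line'},
#  {'indicator': 'completion-rate', 'visualization': 'gauge'}]
```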
