About

The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.

Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

1. Interactive Exploration of Text Databases

Huang, Ziqi January 2015
No description available.

2. Interactive Data Visualization In Accounting Contexts: Impact On User Attitudes, Information Processing, And Decision Outcomes

Ajayi, Oluwakemi 01 January 2014
In 2009, the United States Securities and Exchange Commission (SEC) issued a mandate requiring public companies to provide financial information to the SEC, and on their corporate Web sites, in an interactive data format using the eXtensible Business Reporting Language (XBRL). This dissertation consists of three separate but interrelated studies exploring issues related to interactive data visualization in financial reporting contexts. The first study employs theories from information systems (task-technology fit and the technology-performance chain model) and cognitive psychology (cognitive load) to examine the link between characteristics of interactive data visualization and task requirements in a financial analysis context, and the impact of that link on task performance and user attitudes toward interactive data technology use. The second study extends the first by examining the effects of prior interactive data technology use on the future choice to use an interactive technology. This study uses the IS continuance model to examine antecedents of continued interactive technology use based on previous assessments of task-technology fit and performance impacts from the first study. The third study employs the elaboration likelihood model (ELM) to understand the interactivity concept and its impact on information processing and belief/attitude formation. It examines the impact of increasing interactivity on investor perceptions of forecast credibility and on a firm's attractiveness as a potential investment choice. Overall, these three studies provide insights into the factors that affect decision-making in interactive financial reporting contexts and into how characteristics of interactive data visualization affect information processing, user perceptions, and task performance.

3. Uma Arquitetura para Aplicações em Processamento de Imagens: um Estudo em Hardware/Software / An Architecture for Image Processing Applications: A Hardware/Software Study

Viana da Silva, Pablo January 2002
This work presents a hardware/software architecture for applications in image processing. The aim of the system is the implementation of a computer vision system for urban traffic control, intended to detect the presence of vehicles in an area of interest within the visual field captured by a digital video camera installed on a public road. The methodology begins with the development of the digital image processing algorithm in a high-level tool (IDL, the Interactive Data Language), exploring implementation alternatives through experiments with image enhancement and analysis techniques. In the next stage of the adopted design flow, the functions that make up the algorithm are translated into a mid-level language (C/C++), producing an executable program that implements the algorithm and gives the system's user control over functional adjustments and the processing results. Within the hardware/software design methodology, portions of the algorithm that account for a large share of the processing time, such as convolution filtering, are migrated to a hardware implementation by synthesizing a hardware description (VHDL, the Very high speed integrated circuit Hardware Description Language) onto a programmable logic device, in order to meet the system's timing requirements.
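
To make the hardware/software partitioning concrete, here is a minimal sketch of the kind of convolution filtering the abstract identifies as a processing hot spot. It is written in Python/NumPy purely for illustration; the 3x3 Sobel kernel, the frame size, and the function name are assumptions of this sketch, not details taken from the dissertation, which prototypes the algorithm in IDL and implements it in C/C++ and VHDL.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2D convolution. The two nested loops below are exactly the kind of
    compute-bound region that a hardware/software partitioning would move to an
    FPGA, while the surrounding control logic stays in software."""
    k = np.flipud(np.fliplr(kernel))          # flip so this is true convolution
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(image.shape, dtype=np.float64)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            out[y, x] = np.sum(padded[y:y + kh, x:x + kw] * k)
    return out

# Illustrative use: a Sobel edge filter over a synthetic 8-bit frame
# (a stand-in for a frame from the traffic camera described in the abstract).
frame = np.random.randint(0, 256, size=(120, 160)).astype(np.float64)
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=np.float64)
edges = convolve2d(frame, sobel_x)
print(edges.shape)  # (120, 160)
```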

4. Exploiting Human Factors and UI Characteristics for Interactive Data Exploration

Khan, Meraj Ahmed January 2019
No description available.

5. Automatic assessment of OLAP exploration quality / Evaluation automatique de la qualité des explorations OLAP

Djedaini, Mahfoud 06 December 2017
Before the arrival of Big Data, the amount of data held in databases was relatively small and therefore fairly simple to analyze. In that context, the main challenge in the field was to optimize data storage and, above all, the response time of database management systems (DBMSs). Many benchmarks, notably those of the TPC consortium, were created so that existing systems could be evaluated under similar conditions. The arrival of Big Data, however, completely changed the situation, with more and more data generated every day. Alongside the increase in available memory, new storage methods based on distributed systems have emerged, such as the HDFS file system used in Hadoop to cover the storage and processing needs of Big Data. The growing volume of data therefore makes its analysis much more difficult. In this context, the goal is not so much to measure how fast data can be retrieved, but rather to produce coherent sequences of queries that quickly identify areas of interest in the data, so that those areas can be analyzed in more depth and information extracted to support informed decision-making. / In a Big Data context, traditional data analysis is becoming more and more tedious. Many approaches have been designed and developed to support analysts in their exploration tasks. However, there is no automatic, unified method for evaluating the quality of support these different approaches provide. Current benchmarks focus mainly on evaluating systems in terms of temporal, energy, or financial performance. In this thesis, we propose a model, based on supervised machine learning methods, to evaluate the quality of an OLAP exploration. We use this model to build a benchmark for exploration support systems, the general principle of which is to let these systems generate explorations and then evaluate them through the explorations they produce.
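
As a loosely hedged illustration of the supervised-learning idea in the abstract, the sketch below scores explorations from a few hand-crafted session features with scikit-learn. The feature names, the tiny training set, and the choice of a random forest are all assumptions of this sketch; they are not the thesis's actual benchmark, feature set, or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-exploration features: number of queries, average drill-down
# depth, fraction of repeated views, and share of queries touching cells a
# domain expert flagged as interesting. Labels mark explorations judged good (1)
# or poor (0). All values are invented for illustration.
X = np.array([
    [12, 3.0, 0.10, 0.45],
    [ 4, 1.0, 0.60, 0.05],
    [20, 4.5, 0.05, 0.70],
    [ 6, 1.5, 0.40, 0.10],
    [15, 3.5, 0.15, 0.55],
    [ 5, 1.2, 0.55, 0.08],
])
y = np.array([1, 0, 1, 0, 1, 0])

model = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(model, X, y, cv=3)
print("cross-validated accuracy:", scores.mean())
```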

6. Den uppkopplade enkätundersökningen : En studie av informationsvisualiseringen i Mentimeter / The online survey : A study of information visualization in Mentimeter

Rantzow, Gustav; Prochownik, Natalia January 2014
A new phenomenon on the Internet is the online audience response system. Mentimeter is a web-based tool for creating online polls: the audience casts votes and the result is shown in real time. We test the information visualization in Mentimeter and base our hypothesis that a better graph design is possible on the work of Edward Tufte and Stephen Few. Edward Tufte is a professor emeritus at Yale University, where he taught courses in statistical evidence and information design, and he has developed theories about how visual information should be designed. Stephen Few is a well-known information designer who has based many of his theories on Tufte's work. We compare Tufte's and Few's design principles against the Mentimeter tool through user testing. We find that Tufte's and Few's theories still hold up in the comparison with Mentimeter and that a redesign of its graphs could raise the quality and user experience of the tool. At the same time, Mentimeter is also a tool that functions the way it is supposed to.
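
To make the Tufte/Few-style redesign concrete, the sketch below strips a default matplotlib bar chart of non-data ink (frame, ticks, and axis) and labels the bars directly. The poll options and vote counts are invented, and this illustrates the design principles in general rather than Mentimeter's actual charts or the redesign tested in the thesis.

```python
import matplotlib.pyplot as plt

# Invented poll results, used only to illustrate a "maximize data-ink" redesign
# in the spirit of Tufte and Few.
options = ["Strongly agree", "Agree", "Neutral", "Disagree"]
votes = [42, 31, 12, 5]

fig, ax = plt.subplots(figsize=(6, 3))
bars = ax.barh(options, votes, color="#4c72b0")
ax.invert_yaxis()  # keep the first option on top, matching reading order

# Remove chart junk: the surrounding frame, tick marks, and the x axis.
for spine in ax.spines.values():
    spine.set_visible(False)
ax.xaxis.set_visible(False)
ax.tick_params(left=False)

# Label each bar directly instead of making the reader scan an axis.
for bar, value in zip(bars, votes):
    ax.text(bar.get_width() + 0.5, bar.get_y() + bar.get_height() / 2,
            str(value), va="center")

fig.tight_layout()
fig.savefig("poll_redesign.png", dpi=150)
```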

7. Data Fusion Ontology: Enabling a Paradigm Shift from Data Warehousing to Crowdsourcing for Accelerated Pace of Research

Raje, Satyajeet 12 September 2016
No description available.

8. Voice Capacity and Data Response Time in Cognitive Radio Networks

Gunawardena, Subodha 09 May 2013
The growing interest in wireless communication services in recent years has increased the demand for radio spectrum. Inefficient spectrum management, together with the scarcity of radio spectrum, is a limiting factor for the development of modern wireless networks. As a solution, cognitive radio networks (CRNs) have been introduced so that licensed spectrum can be used for the benefit of unlicensed secondary users. However, the preemptive priority of the licensed users results in random resource availability at the secondary networks, which makes quality-of-service (QoS) support challenging. With the increasing demand for elastic/interactive data services (Internet-based services) and wireless multimedia services, QoS support becomes essential for CRNs. This research investigates voice and elastic/interactive data service support over CRNs in terms of their delay requirements. The packet-level requirements of the voice service and the session-level delay requirements of the elastic/interactive data services are studied. In particular, constant-rate and on-off voice traffic capacities are analyzed over CRNs with centralized and distributed network coordination. Some generic channel access schemes are considered as the coordination mechanism, and call admission control algorithms are developed for non-fully-connected CRNs. The advantage of supporting voice traffic flows with different delay requirements in the same network is also discussed. The mean response time of elastic data traffic over a centralized CRN is studied, considering the shortest processor time (with and without preemption) and shortest remaining processor time service disciplines, in comparison with the processor sharing service discipline. The effects of the traffic load at the base station and of the file-length (service-time requirement) distribution on the mean response time are discussed. Finally, the relationship between the mean response times of interactive and elastic data traffic is studied.
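
As classical queueing background for the response-time comparison above (a textbook M/G/1 result, not a formula taken from the thesis), processor sharing has a mean response time that depends on the file-length distribution only through its mean, which is why the abstract's size-based disciplines are compared against the processor-sharing baseline:

```latex
% M/G/1 queue with arrival rate \lambda, mean service requirement E[S],
% and load \rho = \lambda\, E[S] < 1. Under processor sharing (PS):
E[T_{\mathrm{PS}}] \;=\; \frac{E[S]}{1 - \rho}
% Shortest remaining processing time (SRPT) minimizes mean response time
% among all scheduling disciplines, so E[T_{\mathrm{SRPT}}] \le E[T_{\mathrm{PS}}].
```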

9. Interactive Data Visualization: Applications Used to Illuminate the Environmental Effects of the Syrian War

Karaca, Ece 04 September 2018
No description available.
