91 |
Assisted Viewpoint Interaction for 3D Visualization. Hughes, Stephen Benjamin, 30 September 2005.
Many three-dimensional visualizations are characterized by the use of a mobile viewpoint that offers multiple perspectives on a set of visual information. To effectively control the viewpoint, the viewer must simultaneously manage the cognitive tasks of understanding the layout of the environment, and knowing where to look to find relevant information, along with mastering the physical interaction required to position the viewpoint in meaningful locations. Numerous systems attempt to address these problems by catering to two extremes: simplified controls or direct presentation. This research attempts to promote hybrid interfaces that offer a supportive, yet unscripted exploration of a virtual environment.
Attentive navigation is a specific technique designed to actively redirect viewers' attention while accommodating their independence. User evaluation shows that the technique effectively facilitates several visualization tasks, including landmark recognition, survey knowledge acquisition, and search sensitivity. Unfortunately, it also proves excessively intrusive, leading viewers to occasionally struggle with the automation for control of the viewpoint. Additional design iterations suggest that formalized coordination protocols between the viewer and the automation can mute these shortcomings and enhance the effectiveness of the initial attentive navigation design.
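The core idea of a hybrid interface between full user control and full automation can be sketched as a weighted blend of the user's intended view direction and a system-suggested attention target. This is an illustrative caricature, not the dissertation's implementation; the blending weight `alpha` and the vector representation are assumptions.

```python
import math

def blend_view_direction(user_dir, target_dir, alpha):
    """Blend the user's intended view direction with a system-suggested
    attention target. alpha=0 gives full user control; alpha=1 snaps the
    viewpoint to the target (the 'excessively intrusive' extreme)."""
    blended = tuple(u * (1 - alpha) + t * alpha
                    for u, t in zip(user_dir, target_dir))
    norm = math.sqrt(sum(c * c for c in blended))
    return tuple(c / norm for c in blended)

# Mild assistance: mostly user-driven, gently nudged toward the landmark.
d = blend_view_direction((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25)
```

A coordination protocol in the spirit of the later design iterations could then modulate `alpha` over time, e.g. dropping it to zero the moment the user actively steers.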
The implications of this research generalize to inform the broader requirements for Human-Automation interaction through the visual channel. Potential applications span a number of fields, including visual representations of abstract information, 3D modeling, virtual environments, and teleoperation experiences.
|
92 |
Recall of Landmarks in Information Space. Sorrows, Molly, 30 September 2005.
Research on navigation and landmarks in physical space, information space, and virtual reality environments indicates that landmarks play an important role in all types of navigation. This dissertation tackles the problem of defining and evaluating the characteristics of landmarks in information space. This work validates a recent theory that three types of characteristics (structural, visual, and semantic) are important for effective landmarks.
This dissertation applies concepts and techniques from the extensive body of research on physical space navigation to the investigation of landmarks on a World Wide Web site. Data were collected in two experiments examining the characteristics of web pages on the University of Pittsburgh web site; in addition, objective measurements of page characteristics were made for comparison with the experimental data. The two experiments examined subjects' knowledge, use, and evaluation of web pages. This research is unique among studies of web navigation in its use of experimental techniques that ask subjects to recall possible navigation paths and URLs from memory.
Two measures of landmark quality were used to examine the characteristics of landmarks: first, an algorithm incorporating objective measures of the structural, visual, and semantic characteristics of each web page; and second, a measure based on the experimental data regarding subjects' knowledge and evaluation of each page.
Analysis of this data from a web space confirms the tri-partite theory of characteristics of landmarks. Significant positive correlations were found between the objective and subjective landmark measures, indicating that this work is an important step toward the ability to objectively evaluate web pages and web site design in terms of landmarks. This dissertation further suggests that researchers can utilize the characteristics to analyze and improve the design of information spaces, leading to more effective navigation.
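The comparison of objective and subjective landmark measures can be sketched as follows. The equal weighting, the three-page sample, and all numbers are hypothetical; the dissertation's actual algorithm and rating scale are not reproduced here.

```python
def landmark_score(structural, visual, semantic, weights=(1/3, 1/3, 1/3)):
    """Objective landmark quality as a weighted combination of the three
    characteristic scores (each assumed normalized to [0, 1])."""
    ws, wv, wm = weights
    return ws * structural + wv * visual + wm * semantic

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical (structural, visual, semantic) scores for three pages,
# paired with hypothetical mean subjective ratings.
objective = [landmark_score(*page) for page in
             [(0.9, 0.8, 0.9), (0.2, 0.3, 0.1), (0.6, 0.5, 0.7)]]
subjective = [4.5, 1.5, 3.0]
r = pearson_r(objective, subjective)
```

A significant positive `r`, as the dissertation reports for its measures, is what licenses using the objective score as a stand-in for user judgments.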
|
93 |
AUTOMATED FEATURE EXTRACTION AND CONTENT-BASED RETRIEVAL OF PATHOLOGY MICROSCOPIC IMAGES USING K-MEANS CLUSTERING AND CODE RUN-LENGTH PROBABILITY DISTRIBUTION. Zheng, Lei, 31 January 2006.
The dissertation starts with an extensive literature survey on current issues in content-based image retrieval (CBIR) research and the state-of-the-art theories, methodologies, and implementations, covering topics such as general information retrieval theory, imaging, image feature identification and extraction, feature indexing and multimedia database search, user-system interaction, relevance feedback, and performance evaluation. A general CBIR framework is proposed with three layers: image document space, feature space, and concept space. The framework emphasizes that while the projection from the image document space to the feature space is algorithmic and unrestricted, the connection between the feature space and the concept space is based on statistics rather than semantics. The scheme favors image features that do not rely on excessive assumptions about image content.
As an attempt to design a new CBIR methodology following this framework, k-means clustering color quantization is applied to pathology microscopic images, followed by code run-length probability distribution feature extraction. Kullback-Leibler divergence is used as the distance measure for feature comparison. For content-based retrieval, the distance between two images is defined as a function of all individual features. The process is highly automated, and the system works effectively across different tissues without human intervention. Possible improvements and future directions are also discussed.
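The pipeline (quantize colors into a small code book, extract a run-length distribution, compare distributions with Kullback-Leibler divergence) can be sketched in miniature. This is a 1-D toy, not the dissertation's system; the smoothing constant `eps` and all parameters are assumptions.

```python
import math
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Toy 1-D k-means: quantize scalar intensities into k code centers."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            clusters[min(range(k), key=lambda i: abs(v - centers[i]))].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

def run_length_distribution(codes):
    """Probability distribution over run lengths in a code sequence."""
    runs, count = {}, 1
    for prev, cur in zip(codes, codes[1:]):
        if cur == prev:
            count += 1
        else:
            runs[count] = runs.get(count, 0) + 1
            count = 1
    runs[count] = runs.get(count, 0) + 1
    total = sum(runs.values())
    return {length: n / total for length, n in runs.items()}

def kl_divergence(p, q, eps=1e-9):
    """Kullback-Leibler divergence D(p || q), smoothed so run lengths
    absent from q do not produce infinities."""
    support = set(p) | set(q)
    return sum(p.get(s, eps) * math.log(p.get(s, eps) / q.get(s, eps))
               for s in support)
```

Note that KL divergence is asymmetric; a retrieval distance would typically symmetrize it, e.g. `D(p||q) + D(q||p)`, though the abstract does not say which form the dissertation uses.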
|
94 |
Use of Information in an Intermediate Store in the Automotive Industry: A Case Study of Vici Industri AB in Skövde. Batangouna, Steve Jehu, January 2012.
Making information visible at every stage of a company's process, from beginning to end, makes it possible to detect errors early and to correct the information quickly and efficiently. This is why Pearlson & Saunders (2010) argue that information systems rest on three fundamental pillars: technology, people, and processes. Without interaction among these pillars, it is difficult or impossible to implement any form of database successfully. Today, information technology (IT) is one of the effective tools enabling a common database within an organization. Such a database aims to coordinate and streamline the flow of both material and information within the company, and it creates the conditions for complete, timely, and accurate information to be shared among departments. This in turn enables efficient production, which leads to profitability. The flows of material and information must go hand in hand to improve production efficiency; Mattsson (2010) therefore argues that a more efficient material flow uses resources effectively and creates the conditions for satisfying customer demands well. Achieving this requires the company to make it possible to report errors at the earliest stages of the process, avoiding unnecessary work that customers are not prepared to pay for. Alter (2006) accordingly points out that greater operational efficiency can be reached only when there is a clear link between information, technology, users, and the management of information activities.
|
95 |
Evaluation of Information Quality in Business Intelligence as a Key Success Factor for Using Decision Support Systems. Mehdi Hadi, Abidalsajad-Kamel; Lin, Yin-tsu, January 2011.
No description available.
|
96 |
Internet QoS Market Analysis with Peering and Usage-Sensitive Pricing: A Game Theoretic and Simulation Approach. Shin, SeungJae, 12 June 2003.
One of the major areas for research and investment related to the Internet is the provision of quality of service (QoS). We remain confident that in the not-too-distant future, QoS will be introduced not only in private networks but across the whole Internet. QoS will bring new features to the Internet market: (1) vertical product differentiation between best-effort (BE) and QoS service, and (2) usage-sensitive pricing with metering. In this dissertation, equilibrium outcomes are analyzed when two rural Internet Access Providers (IAPs) interact through several business and technical strategies: technology (BE or QoS), pricing scheme (flat-rate pricing or two-part tariff), interconnection (transit or peering), and investment in network capacity. To determine the equilibria, we construct a duopoly game model based on Cournot theory and calibrate it to data found in real markets. In this model, we study ten cases combining the strategic choices of the two IAPs. We use two demand functions: one based on a uniform distribution and the other on an empirical distribution drawn from a U.S. General Accounting Office (U.S. GAO) survey of Internet usage; for the latter, we use a two-stage RNG (random number generator) simulation and a linear regression. If we take IAPs with BE and flat-rate pricing to represent the current Internet, the equilibrium points of each case suggest a progressive equilibrium path to the future Internet market. Based on the equilibrium analysis of the game model, we conclude that (1) {QoS, two-part tariff, transit/peering} or {QoS, flat-rate pricing, peering} will be a plausible situation in the future Internet access market, (2) network capacity will remain an important strategy in determining market equilibrium, (3) BE will retain a considerable market share in the QoS Internet, and (4) peering arrangements in the QoS Internet will provide higher social welfare than transit.
These implications from the game analysis present an analytical framework for future Internet policy.
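The Cournot equilibrium at the core of the model can be illustrated in closed form for the simplest textbook case: linear inverse demand and constant marginal costs. The numbers are illustrative only, not the dissertation's calibrated market data.

```python
def cournot_equilibrium(a, b, c1, c2):
    """Closed-form Cournot-Nash quantities for two providers facing linear
    inverse demand p = a - b*(q1 + q2), with marginal costs c1 and c2.
    Derived from each firm's best response q_i = (a - c_i - b*q_j) / (2b)."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    price = a - b * (q1 + q2)
    return q1, q2, price

# Symmetric providers: identical cost structures yield identical quantities.
q1, q2, price = cournot_equilibrium(a=100.0, b=1.0, c1=10.0, c2=10.0)
```

The dissertation's ten cases vary richer strategy choices (BE vs. QoS, pricing scheme, interconnection) on top of this quantity-setting logic, so its equilibria are found numerically rather than in closed form.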
|
97 |
802.11 Markov Channel Modelling. Arauz, Julio Nicolas, 28 January 2005.
In order to understand the behavior of upper-layer protocols and to design or fine-tune their parameters over wireless networks, it is common to assume that the underlying channel is a flat Rayleigh fading channel. Such channels are commonly modeled as finite-state Markov chains, and hidden Markov models have recently been employed to characterize them as well. Although Markov models have been widely used to study the performance of communications protocols at the link and transport layers, their accuracy has not been validated against experimental data. These models are also not applicable to frequency-selective fading channels. Moreover, there are no good models for the effects of path loss (average received SNR), packet size, and transmission rate variations, all of which are significant in IEEE 802.11 wireless local area networks.
This research validates Markov models against experimental data and discusses the limitations of the process. In this dissertation, we present various proposed models along with an analysis of their validity. We use the experimental data with stochastic modeling approaches to characterize frame losses in IEEE 802.11 wireless LANs, and we characterize an important factor in current wireless LAN technology: transmission rate variations. New guidelines for constructing Markov and hidden Markov models of wireless LAN channels are developed and presented, along with the data necessary to use them in performance studies. Furthermore, we evaluate the validity of using Markovian models to understand the effects on upper-layer protocols such as TCP.
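The classic two-state finite-state Markov channel (the Gilbert-Elliott model) that such studies build on can be simulated in a few lines. All transition probabilities and loss rates below are hypothetical, not the dissertation's measured values.

```python
import random

def gilbert_elliott(n_frames, p_gb, p_bg, loss_good, loss_bad, seed=0):
    """Simulate frame losses on a two-state (Gilbert-Elliott) Markov channel:
    a 'good' and a 'bad' state with different frame-loss rates, with
    per-frame transition probabilities p_gb (good->bad) and p_bg (bad->good)."""
    rng = random.Random(seed)
    state = "good"
    losses = []
    for _ in range(n_frames):
        loss_rate = loss_good if state == "good" else loss_bad
        losses.append(rng.random() < loss_rate)
        flip = p_gb if state == "good" else p_bg
        if rng.random() < flip:
            state = "bad" if state == "good" else "good"
    return losses

losses = gilbert_elliott(10000, p_gb=0.05, p_bg=0.3,
                         loss_good=0.01, loss_bad=0.4)
rate = sum(losses) / len(losses)
```

The steady-state bad-state probability is `p_gb / (p_gb + p_bg)`, so the long-run loss rate here should sit near 0.86 * 0.01 + 0.14 * 0.4, roughly 6 to 7 percent; validating such a model means fitting these parameters to measured 802.11 traces and checking that burst statistics match.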
|
98 |
ENERGY CONSERVATION FOR WIRELESS AD HOC ROUTING. Hou, Xiaobing, 26 July 2006.
Self-configuring wireless ad hoc networks have attracted considerable attention in the last few years due to their valuable civil and military applications. One aspect of such networks that has been insufficiently studied is energy efficiency, which is crucial for prolonging network lifetime and thus making the network more survivable.
Nodes in wireless ad hoc networks are typically battery-powered and hence operate on an extremely frugal energy budget. Conventional ad hoc routing protocols focus on handling mobility rather than energy efficiency. Energy-efficient routing strategies proposed in the literature either do not take advantage of sleep modes to conserve energy, or incur so much overhead in control messages and computational complexity to schedule sleep modes that they do not scale.
In this dissertation, a novel strategy is proposed to manage the sleep of nodes in the network so that energy is conserved while network connectivity is maintained. The novelty of the strategy is its extreme simplicity. The idea derives from results in percolation theory, typically called gossiping. Gossiping is a convenient and effective approach that has been successfully applied to several areas of networking. In the proposed work, we develop a sleep management protocol from gossiping for both static and mobile wireless ad hoc networks, and then extend the protocol to asynchronous networks, where nodes manage their own states independently. Analysis and simulations are conducted to show the correctness, effectiveness, and efficiency of the proposed work; the comparison between analytical and simulation results justifies each against the other. We investigate the most important performance aspects of the proposed strategy, including the effect of parameter tuning and the impact of routing protocols. Furthermore, multiple extensions are developed to improve performance and adapt the strategy to different network scenarios.
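The gossip-style idea, stripped to its essentials: each node independently stays awake with some probability, and percolation results say that above a threshold the awake nodes still form a connected relay backbone with high probability. The sketch below shows only this independent coin-flip core; the probability value is hypothetical and the actual protocol's coordination details are not reproduced here.

```python
import random

def gossip_sleep_schedule(n_nodes, p_awake, seed=None):
    """Each node independently stays awake with probability p_awake.
    Above the percolation threshold for the given topology, the awake
    subset remains connected with high probability, so routing still works
    while the sleeping fraction (1 - p_awake) saves energy."""
    rng = random.Random(seed)
    return [rng.random() < p_awake for _ in range(n_nodes)]

awake = gossip_sleep_schedule(1000, p_awake=0.7, seed=1)
fraction = sum(awake) / len(awake)
```

The appeal over scheduling-based schemes is visible even in this toy: no control messages are exchanged at all, so the overhead is independent of network size.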
|
99 |
The Tuskegee Syphilis Study: Access and Control over Controversial Records. Whorley, Tywanna, 6 October 2006.
As the nation's archives, the National Archives and Records Administration (NARA) preserves and provides access to records that document how our government conducts business on behalf of the American people, past and present. For the American citizen, NARA provides a form of accountability through the records in its custody, which affect the nation's collective memory. A plethora of these records, however, contain evidence of the federal government's misconduct in episodes of American history that affected public trust. The Tuskegee Syphilis Study records are a prime example of records in NARA's custody that continue to have a lasting effect on public trust in the federal government. Even though NARA disclosed administrative records that document the government's role in the study, the Tuskegee Syphilis Study records continue to challenge the institution on a variety of archival issues, such as access, privacy, collective memory, and accountability. Through historical case study methodology, this study examines the National Archives and Records Administration's administrative role in maintaining and providing access to the Tuskegee Syphilis Study records, especially the restricted information. The effect of the changing social context on NARA's recordkeeping practices for the Tuskegee Syphilis Study records is also explored.
|
100 |
Integrating Protein Data Resources through Semantic Web Services. Liu, Xiong, 30 January 2007.
Understanding the function of every protein is a major objective of bioinformatics. Currently, a large amount of information associated with protein function (e.g., sequence, structure, and dynamics) is being produced by experiments and predictions. Integrating these diverse data about protein sequence, structure, dynamics, and other protein features allows further exploration of the relationships between these features and protein function, and thereby supports controlling the function of target proteins. However, information integration across protein data resources faces challenges at the technology level, in interfacing heterogeneous data formats and standards, and at the application level, in the semantic interpretation of dissimilar data and queries.
In this research, a semantic web services infrastructure, called Web Services for Protein data resources (WSP), for flexible and user-oriented integration of protein data resources, is proposed. This infrastructure includes a method for modeling protein web services, a service publication algorithm, an efficient service discovery (matching) algorithm, and an optimal service chaining algorithm. Rather than relying on syntactic matching, the matching algorithm discovers services based on their similarity to the requested service. Therefore, users can locate services that semantically match their data requirements even if they are syntactically distinctive. Furthermore, WSP supports a workflow-based approach for service integration. The chaining algorithm is used to select and chain services, based on the criteria of service accuracy and data interoperability. The algorithm generates a web services workflow which automatically integrates the results from individual services.
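Similarity-based (rather than syntactic) service discovery can be sketched as matching concept annotations between a request and registered services. The abstract does not specify WSP's similarity measure, so a Jaccard overlap over concept sets is used here as a simple stand-in; the registry entries and service names are hypothetical.

```python
def concept_similarity(required, offered):
    """Jaccard similarity between the concept sets annotating a request
    and a candidate service; 1.0 means an exact semantic match."""
    required, offered = set(required), set(offered)
    if not required and not offered:
        return 1.0
    return len(required & offered) / len(required | offered)

def discover(request, registry, threshold=0.5):
    """Rank registered services by semantic similarity to the request,
    keeping those at or above the match threshold."""
    scored = [(concept_similarity(request, svc["concepts"]), svc["name"])
              for svc in registry]
    return sorted((s, n) for s, n in scored if s >= threshold)[::-1]

registry = [
    {"name": "blast_search", "concepts": {"protein", "sequence", "alignment"}},
    {"name": "md_sim", "concepts": {"protein", "dynamics", "trajectory"}},
    {"name": "gene_ontology", "concepts": {"gene", "annotation"}},
]
matches = discover({"protein", "sequence"}, registry, threshold=0.4)
```

The point of such a measure is that a request phrased with different identifiers than a service's interface can still match, which is exactly the gap syntactic matching leaves open; the chaining step would then wire the best matches into a workflow.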
A number of experiments are conducted to evaluate the performance of the matching algorithm; the results show that it discovers services with reasonable performance. In addition, a composite service that integrates protein dynamics and conservation is demonstrated using the WSP infrastructure.
|