181

End-user specification of interactive displays.

Mohamed, Shamim P. January 1993 (has links)
Presenting data graphically can often make it more understandable: well-designed graphics can be more effective than a tabular display of numbers, and it is much easier to grasp the relationships and groupings in data by looking at a pictorial representation than at raw numbers. Most visualization systems to date, however, have allowed users to choose from only a small number of pre-defined display methods, which does not allow the easy development of new and innovative display techniques. These systems also present a static display: users cannot interact with and explore the data. More innovative displays, and the systems that implement them, tend to be extremely specialised and closely tied to an underlying application. We propose techniques, and a system, with which the user can specify most kinds of displays. The system provides facilities to integrate user-input devices into the display, so that users can interact and experiment with the data; this encourages an exploratory approach to data understanding. Most users of such systems are sophisticated enough to use advanced techniques, but conventional programming languages are too hard to learn just for occasional use. It is well known that direct manipulation is a powerful technique for novice users: systems that use it are much easier to learn, and to remember for occasional use. We provide a system that uses these techniques to provide a visualization tool. Extensions to the WYSIWYG (What You See Is What You Get) metaphor are provided to handle its shortcomings: the difficulty of specifying deferred actions and abstract objects. In the data graphics domain, the main drawbacks of WYSIWYG systems are the difficulty of allowing a variable number of data items and of specifying conditional structures. The system also encourages re-use and sharing of commonly used display idioms: pre-existing displays can easily be incorporated into new displays, and modified to suit the user's specific needs. This allows novices and unsophisticated users to modify, and effectively use, display techniques that advanced users have designed.
182

Heterogeneity and Density Aware Design of Computing Systems

Arora, Manish 03 August 2018 (has links)
The number of programmable cores available in systems continues to increase with advances in device scaling, integration, and iterative improvements. Today, systems are not just integrating more cores, but also integrating a variety of different types of processing cores, resulting in dense heterogeneous systems. However, important questions remain about the design methodology for dense heterogeneous systems. This thesis seeks to address these questions.

One typical methodology for heterogeneous system design is to compose systems from parts of homogeneous systems. Another commonly used technique to enable density is replication. However, these design methodologies are "heterogeneous system oblivious" and "density oblivious": the components of the system are neither aware of, nor optimized for, the heterogeneous system they become part of, nor are they aware of the existence of the other replicated components. This thesis shows that "heterogeneous system oblivious" and "density oblivious" design methodologies result in inefficient systems, and proposes heterogeneity and density aware approaches to designing dense heterogeneous architectures.
183

Supervised and Unsupervised Learning for Semantics Distillation in Multimedia Processing

Liu, Yu 19 October 2018 (has links)
In linguistics, "semantics" refers to the intended meaning in natural language, such as in words, phrases and sentences. In this dissertation, the concept "semantics" is defined more generally: the intended meaning of information in all multimedia forms. The multimedia forms include text in the language domain, as well as stationary images and dynamic videos in the vision domain. Specifically, semantics in multimedia is the media content of cognitive information, knowledge and ideas that can be represented in text, images and video clips. A narrative story, for example, can be a semantic summary of a novel, or of the movie adapted from that book. Thus, semantics is high-level abstract knowledge that is independent of multimedia form.

Indeed, the same amount of semantics can be represented either redundantly or concisely, owing to the varying expressive power of different media. The process by which a redundantly represented semantics evolves into a concisely represented one is called "semantic distillation", and this process can happen either between different multimedia forms or within the same form.

The booming growth of unorganized and unfiltered information brings an unwanted issue, information overload, for which techniques of semantic distillation are in high demand. As opportunities always come with challenges, machine learning and Artificial Intelligence (AI) today are far more advanced than in the past and provide us with powerful tools: a large variety of learning methods has made countless previously impossible tasks a reality. In this dissertation, we therefore take advantage of machine learning techniques, both supervised and unsupervised, to solve semantics distillation problems.

Despite the promising future and powerful machine learning techniques, the heterogeneous forms of multimedia involving many domains still impose challenges on semantics distillation approaches. A major challenge is that the definition of "semantics" and the related processing techniques can be entirely different from one problem to another. Varying types of multimedia resources introduce varying kinds of domain-specific limitations and constraints, so obtaining semantics also becomes domain-specific. Therefore, in this dissertation, with text (language) and vision as the two major domains, we approach four problems covering all combinations of the two domains:

• Language to Vision Domain: Presentation Storytelling is formulated as the problem of suggesting the most appropriate images from online sources for storytelling, given a text query. We approach the problem with a two-step semantics processing method, where the semantics of a simple query is first expanded into a diverse semantic graph, and then distilled from a large number of searched web photos down to a few representative ones. This two-step method is built on a Conditional Random Field (CRF) model and learned in a supervised manner from human-labeled examples.

• Vision to Language Domain: The second study, Visual Storytelling, formulates the problem of generating a coherent paragraph from a photo stream. Different from presentation storytelling, visual storytelling goes the opposite way: the semantics extracted from a handful of photos is distilled into text. We address this problem by revealing the semantic relationships in the visual domain and distilling them into the language domain with a newly designed Bidirectional Attention Recurrent Neural Network (BARNN) model. In particular, an attention model is embedded in the RNN so that coherence is preserved in the language domain and the output reads like a human-told story. The model is trained with deep, supervised learning on public datasets.

• Dedicated Vision Domain: To directly address the information-overload issue in the vision domain, Image Semantic Extraction formulates the problem of selecting a subset from a multimedia user's photo albums. In the literature, this problem has mostly been approached with unsupervised learning; in this dissertation, we develop a novel supervised learning method to attack the same problem. We treat visual semantics as a quantifiable, measurable variable, and build an encoding-decoding pipeline with Long Short-Term Memory (LSTM) to model this quantization process. The intuition of the encoding-decoding pipeline is to imitate a human: read, think, and retell. That is, the pipeline first uses an LSTM encoder that scans all photos to "read" the comprised semantics, then concatenates an LSTM decoder that selects the most representative ones to "think" out the gist semantics, and finally adds a dedicated residual layer that revisits the unselected photos to "verify" that the semantics are complete enough.

• Dedicated Language Domain: Distinct from the above studies, this part introduces a different genre of machine learning, unsupervised learning. We address a semantics distillation problem in the language domain, Text Semantic Extraction, where the semantics in a letter sequence are extracted from printed images. (Abstract shortened by ProQuest.)
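The "read, think, and retell" pipeline in the third study lends itself to a small illustration. The sketch below is a hypothetical, simplified rendering of such an LSTM encoder-decoder photo selector in PyTorch, not the dissertation's actual model: the feature dimension, layer sizes, and the way the residual layer is attached are all assumptions made for this sketch.

    # Hypothetical sketch (not the dissertation's code): an LSTM "read-think-verify"
    # pipeline for selecting representative photos from an album, assuming each photo
    # is already embedded as a fixed-length feature vector.
    import torch
    import torch.nn as nn

    class PhotoSelector(nn.Module):
        def __init__(self, feat_dim=512, hidden_dim=256):
            super().__init__()
            self.encoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)   # "read" all photos
            self.decoder = nn.LSTM(feat_dim, hidden_dim, batch_first=True)   # "think": score candidates
            self.score = nn.Linear(hidden_dim, 1)                            # per-photo selection score
            self.residual = nn.Linear(feat_dim, 1)                           # "verify": revisit raw photo features

        def forward(self, photos):
            # photos: (batch, n_photos, feat_dim)
            _, (h, c) = self.encoder(photos)                      # summarize the whole album
            dec_out, _ = self.decoder(photos, (h, c))             # re-read each photo in album context
            logits = self.score(dec_out).squeeze(-1)              # (batch, n_photos) selection logits
            logits = logits + self.residual(photos).squeeze(-1)   # residual pass over raw features
            return torch.sigmoid(logits)                          # probability each photo is kept

    # Usage with dummy data: keep the 3 highest-scoring photos of a 10-photo album.
    model = PhotoSelector()
    album = torch.randn(1, 10, 512)
    probs = model(album)
    keep = probs.topk(3, dim=1).indices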
184

Efficient Actor Recovery Paradigm for Wireless Sensor and Actor Networks

Mahjoub, Reem Khalid 16 March 2018 (has links)
Wireless sensor networks (WSNs) are becoming widely used worldwide. Wireless Sensor and Actor Networks (WSANs) represent a special category of WSNs in which actors and sensors collaborate to perform specific tasks, and they have become one of the most preeminent emerging types of WSNs. Sensor nodes, which have limited power resources, are responsible for sensing and transmitting events to actor nodes. Actors are high-performance nodes equipped with rich resources that can collect, process and transmit data and perform various actions. WSANs have a unique architecture that distinguishes them from WSNs, and numerous challenges arise from their characteristics; the importance of individual factors usually depends on the application requirements.

The actor nodes are the spine of a WSAN and collaborate to perform specific tasks in an unsubstantiated and uneven environment. There is therefore a possibility of a high failure rate in such unfriendly scenarios, due to several factors such as power fatigue of devices, electronic circuit failure, software errors in nodes, physical impairment of the actor nodes, and inter-actor connectivity problems. Maintaining inter-actor connectivity is essential to ensure network connectivity, so it is extremely important to discover the failure of a cut-vertex actor and the resulting network partition in order to preserve Quality-of-Service (QoS). Recovery from actor node failure requires optimal re-localization and coordination techniques.

In this work, we propose an efficient actor recovery (EAR) paradigm to guarantee contention-free traffic-forwarding capacity. The EAR paradigm consists of a Node Monitoring and Critical Node Detection (NMCND) algorithm that monitors the activities of the nodes to determine the critical node and replaces the critical node with a backup node prior to complete node failure, which helps balance network performance. Packets are handled by a Network Integration and Message Forwarding (NIMF) algorithm that determines the source of packet forwarding (either actor or sensor); this decision-making capability controls the packet-forwarding rate to keep the network alive longer. Furthermore, for the routing strategy, a Priority-Based Routing for Node Failure Avoidance (PRNFA) algorithm decides the priority of packets to be forwarded based on the significance of the information they carry. To validate the effectiveness of the proposed EAR paradigm, we compare its performance with state-of-the-art localization algorithms. Our experimental results show superior performance with respect to network lifetime, residual energy, reliability, sensor/actor recovery time and data recovery.
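The cut-vertex actors that the critical-node detection step must watch for can be found with a standard articulation-point search over the inter-actor connectivity graph. The sketch below is a generic illustration under that reading, not the dissertation's NMCND algorithm; the toy topology and the naive backup-selection rule are assumptions.

    # Hypothetical sketch: detect "critical" actors as articulation points, i.e. the
    # nodes whose failure would partition the network, and pair each with a backup neighbor.
    from collections import defaultdict

    def articulation_points(adj):
        """Classic DFS-based articulation-point search on an undirected graph."""
        disc, low, critical = {}, {}, set()
        timer = [0]

        def dfs(u, parent):
            disc[u] = low[u] = timer[0]; timer[0] += 1
            children = 0
            for v in adj[u]:
                if v == parent:
                    continue
                if v in disc:
                    low[u] = min(low[u], disc[v])
                else:
                    children += 1
                    dfs(v, u)
                    low[u] = min(low[u], low[v])
                    if parent is not None and low[v] >= disc[u]:
                        critical.add(u)
            if parent is None and children > 1:
                critical.add(u)

        for node in adj:
            if node not in disc:
                dfs(node, None)
        return critical

    # Toy actor topology: actor "B" is a cut vertex separating {A} from {C, D}.
    adj = defaultdict(set)
    for u, v in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "B")]:
        adj[u].add(v); adj[v].add(u)

    critical = articulation_points(adj)
    backups = {c: next(iter(adj[c] - critical), None) for c in critical}  # naive backup choice
    print(critical, backups)   # {'B'} and a non-critical neighbor of B as its backup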
185

Computation and Communication Optimization in Many-Core Heterogeneous Server-on-Chip

Reza, Md Farhadur 12 May 2018 (has links)
To make full use of the parallelism of the many cores in network-on-chip (NoC) based server-on-chip designs, this dissertation addresses the problem of computation and communication optimization during task-resource co-allocation of large-scale applications onto heterogeneous NoCs. Both static and dynamic task mapping and resource configuration are performed while making the solution aware of the power, thermal, dark/dim silicon, and capacity constraints of the chip. Our objectives are to minimize energy consumption and hotspots, improving NoC performance in terms of latency and throughput while meeting the above-mentioned chip constraints. The task-resource allocation and configuration problems are formulated as linear programming (LP) optimizations for optimal solutions. Because of the high time complexity of LP solutions, fast heuristic approaches are proposed to obtain near-optimal mapping and configuration solutions in finite time for many-core systems.

• We first present hotspot minimization problems and solutions in NoC-based many-core server-on-chip, considering both the computation and communication demands of the applications while meeting the chip constraints in terms of chip area budget, computational capacity of nodes, and communication capacity of links.

• We then address the power and thermal limitations of the dark silicon era by proposing a run-time resource management strategy and mapping for minimizing both hotspots and overall chip energy in many-core NoCs.

• We then present power-thermal aware, load-balanced mapping in heterogeneous CPU-GPU systems on many-core NoCs. A distributed resource management strategy is proposed that uses CPUs for system management and latency-sensitive tasks and GPUs for throughput-intensive tasks.

• We propose a neural network model to dynamically monitor, predict, and configure NoC resources. This work applies local and global neural network classifiers to configure the NoC based on application demands and chip constraints.

• Finally, given the integration of many cores in a single chip, we propose express channels for improving NoC performance in terms of latency and throughput, along with mapping methodologies for efficient task-resource co-allocation in express-channel-enabled many-core NoCs.
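A generic formulation of the kind of task-to-core mapping LP described above might read as follows. This is an illustrative sketch, not the dissertation's exact model: x_{t,c} is a binary variable assigning task t to core c, e_{t,c} the energy of running t on c, w_t the computational load of t, W_c the capacity of core c, r_{s,d} the traffic rate between communicating tasks s and d, y^{s,d}_l an indicator that the route between their host cores uses link l, and B_l the bandwidth of link l; all symbols are assumptions.

    \begin{align}
    \min_{x}\ & \sum_{t \in T}\sum_{c \in C} e_{t,c}\,x_{t,c}
      && \text{total mapping energy}\\
    \text{s.t. }\ & \sum_{c \in C} x_{t,c} = 1 \quad \forall t \in T
      && \text{each task mapped to exactly one core}\\
    & \sum_{t \in T} w_t\,x_{t,c} \le W_c \quad \forall c \in C
      && \text{computational capacity of each core}\\
    & \sum_{(s,d)\in F} r_{s,d}\,y^{s,d}_{\ell} \le B_{\ell} \quad \forall \ell \in L
      && \text{communication capacity of each link}\\
    & x_{t,c} \in \{0,1\}
    \end{align}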
186

Algorithms for Graph Drawing Problems

He, Dayu 08 August 2017 (has links)
A graph G is called planar if it can be drawn on the plane such that no two distinct edges intersect each other except at common endpoints. Such a drawing is called a plane embedding of G. A plane graph is a graph with a fixed embedding. A straight-line drawing Γ of a graph G = (V, E) is a drawing in which each vertex of V is drawn as a distinct point on the plane and each edge of G is drawn as a line segment connecting its two end vertices. In this thesis, we study a set of planar graph drawing problems.

First, we consider the problem of monotone drawing: a path P in a straight-line drawing Γ is monotone if there exists a line l such that the orthogonal projections of the vertices of P on l appear along l in the order they appear in P. We call l a monotone line (or monotone direction) of P. Γ is called a monotone drawing of G if it contains at least one monotone path P_uw between every pair of vertices u, w of G. Monotone drawings were recently introduced by Angelini et al.; they represent a new visualization paradigm and are closely related to several other important graph drawing problems. As in many graph drawing problems, one of the main concerns of this research is to reduce the drawing size, which is the size of the smallest integer grid such that every graph in the graph class can be drawn in such a grid. We present two approaches for the problem of monotone drawings of trees. Our first approach shows that every n-vertex tree T admits a monotone drawing on a grid of size O(n^1.205) × O(n^1.205). Our second approach further reduces the size of the drawing to 12n × 12n, which is asymptotically optimal. Both drawings can be constructed in O(n) time.

We also consider monotone drawings of 3-connected plane graphs. We prove that the classical Schnyder drawing of 3-connected plane graphs is a monotone drawing on an f × f grid, which can be constructed in O(n) time.

Second, we consider the problem of orthogonal drawing. An orthogonal drawing of a plane graph G is a planar drawing of G such that each vertex of G is drawn as a point on the plane and each edge is drawn as a sequence of horizontal and vertical line segments with no crossings. Orthogonal drawing has attracted much attention due to its various applications in circuit schematics, relationship diagrams, data flow diagrams, etc. Rahman et al. gave a necessary and sufficient condition for a plane graph G of maximum degree 3 to have an orthogonal drawing without bends. An orthogonal drawing D(G) is orthogonally convex if all faces of D(G) are orthogonally convex polygons. Chang et al. gave a necessary and sufficient condition (which strengthens the condition in the previous result) for a plane graph G of maximum degree 3 to have an orthogonally convex drawing without bends. We further strengthen these results by showing that if G satisfies the same conditions as in the previous papers, it has not only an orthogonally convex drawing but also a stronger star-shaped orthogonal drawing.
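The monotonicity definition above is easy to check computationally for a fixed direction: project every vertex of the drawn path onto the direction vector and verify the projections never decrease. The sketch below is a small illustration of that definition, not code from the thesis; the example coordinates are made up.

    # Hypothetical illustration of the monotone-path definition: a drawn path is monotone
    # with respect to a direction d if the projections of its vertices onto d appear in
    # path order, i.e. consecutive dot products never decrease.
    def is_monotone(path, direction):
        """path: list of (x, y) vertex positions in path order; direction: (dx, dy)."""
        dx, dy = direction
        proj = [x * dx + y * dy for x, y in path]              # projection onto the direction
        return all(a <= b for a, b in zip(proj, proj[1:]))     # non-decreasing along the path

    # A staircase path is monotone in direction (1, 1) but not in direction (0, -1).
    staircase = [(0, 0), (1, 0), (1, 1), (2, 1), (2, 2)]
    print(is_monotone(staircase, (1, 1)))    # True
    print(is_monotone(staircase, (0, -1)))   # False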
187

Interactive Data Management and Data Analysis

Yang, Ying 05 August 2017 (has links)
Everyone today has a big data problem. Data is everywhere and in different formats; the collections are variously referred to as data lakes, data streams, or data swamps. To extract knowledge or insights from the data, or to support decision-making, we need to go through a process of collecting, cleaning, managing and analyzing the data. In this process, data cleaning and data analysis are two of the most important and time-consuming components.

One common challenge in these two components is a lack of interaction. Data cleaning and data analysis are typically done as a batch process, operating on the whole dataset without any feedback. This leads to long, frustrating delays during which users have no idea whether the process is effective. Lacking interaction, human expert effort is needed to decide which algorithms or parameters the systems for these two components should use.

We should teach computers to talk to humans, not the other way around. This dissertation focuses on building systems, Mimir and CIA, that help users conduct data cleaning and analysis through interaction. Mimir is a system that allows users to clean big data in a cost- and time-efficient way through interaction, a process I call on-demand ETL. Convergent inference algorithms (CIA) are a family of inference algorithms for probabilistic graphical models (PGMs) that enjoy the benefits of both exact and approximate inference algorithms through interaction.

Mimir provides a general language for users to express different data cleaning needs. It acts as a shim layer wrapped around the database, making it possible for the bulk of the ETL process to remain within a classical deterministic system. Mimir also helps users measure the quality of an analysis result and ranks cleaning tasks so that result quality can be improved in a cost-efficient manner. CIA focuses on providing user interaction throughout the process of inference in PGMs. The goal of CIA is to free users from an upfront commitment to either approximate or exact inference, and to give users more control over time/accuracy trade-offs to direct decision-making and the allocation of computation instances. This dissertation describes the Mimir and CIA frameworks to demonstrate that it is feasible to build efficient interactive data management and data analysis systems.
188

Multi-level behavioral self-organization in computer-animated lifelike synthetic agents

Qin, Hong 01 January 1999 (has links)
No description available.
189

Strong-DISM: A First Attempt to a Dynamically Typed Assembly Language (D-TAL)

Hernandez, Ivory 05 December 2017 (has links)
Dynamically Typed Assembly Language (D-TAL) is not only a lightweight and effective answer to the loss of security that occurs when high-level language instructions are translated to low-level instructions, but it also considerably eases the burden imposed by the complexity of implementing typed assembly languages statically. Although there are tradeoffs between the static and dynamic approaches, focusing on a dynamic approach leads to simpler, easier-to-reason-about, and more feasible ways to understand the deployment of types over monomorphically typed or untyped intermediate languages. Here, DISM, a simple but powerful and mature untyped assembly language, is extended with type annotations (on memory and registers) to produce an instance of D-TAL. Statically, the resulting language, Strong-DISM, lends itself to simpler analysis of type access and security, since the correlation of datatypes and instructions with their respective memory and registers becomes simpler to observe; dynamically, it disallows operations and further eliminates conditions that could be used from high-level languages to violate or circumvent security.
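The core idea of a dynamically typed assembly language can be illustrated with a toy register machine that tags every register value with a type and checks the tags before each instruction executes. The sketch below is an invented illustration, not DISM or Strong-DISM itself; the instruction set, tag names and fault handling are assumptions.

    # Hypothetical toy interpreter: every register holds a (tag, value) pair, and each
    # instruction checks the tags at run time, rejecting ill-typed operations.
    class TypeFault(Exception):
        pass

    def run(program, registers):
        """registers maps names to (tag, value) pairs, e.g. {'r1': ('int', 3)}."""
        for op, dst, a, b in program:
            ta, va = registers[a]
            tb, vb = registers[b]
            if op == "add":
                if ta != "int" or tb != "int":
                    raise TypeFault(f"add expects int operands, got {ta}, {tb}")
                registers[dst] = ("int", va + vb)
            elif op == "concat":
                if ta != "str" or tb != "str":
                    raise TypeFault(f"concat expects str operands, got {ta}, {tb}")
                registers[dst] = ("str", va + vb)
            else:
                raise TypeFault(f"unknown instruction {op}")
        return registers

    regs = {"r1": ("int", 3), "r2": ("int", 4), "r3": ("str", "hi")}
    print(run([("add", "r0", "r1", "r2")], dict(regs)))    # r0 becomes ('int', 7)
    try:
        run([("add", "r0", "r1", "r3")], dict(regs))       # mixing int and str operands
    except TypeFault as fault:
        print("rejected:", fault)                          # dynamic check stops the ill-typed add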
190

Programming QR code scanner, communicating Android devices, and unit testing in fortified cards

Patil, Aniket V. 07 December 2017 (has links)
In the contemporary world, where smartphones have become an essential part of our day-to-day lives, Fortified Cards aims to let people monitor the security of their payments using their smartphones. Fortified Cards, as a project, is an endeavor to revolutionize credit and debit card payments using Quick Response (QR) technology and the International Mobile Equipment Identity (IMEI) number.

The emphasis in the Android application of Fortified Cards is on QR technology, communication between two Android devices, and testing the application under situations that could negatively affect the successful implementation of the project. The project documentation illustrates the working of the application graphically with an activity diagram, a step-by-step guide that gives any developer better insight and a detailed description of the implementation.
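One way to read the combination of QR codes and the IMEI number is as binding a payment token to a specific device inside the QR payload. The sketch below illustrates that general idea only and is not the project's actual protocol: the payload fields, the demo key, and the verification rule are all assumptions made for this sketch.

    # Hypothetical sketch: bind a card token to a device IMEI in the QR payload with an
    # HMAC, so a payload presented from a different device fails verification.
    import base64, hashlib, hmac, json

    SECRET = b"demo-issuer-key"   # placeholder key for illustration only

    def make_qr_payload(card_token: str, imei: str) -> str:
        body = {"token": card_token, "imei_digest": hashlib.sha256(imei.encode()).hexdigest()}
        raw = json.dumps(body, sort_keys=True).encode()
        tag = hmac.new(SECRET, raw, hashlib.sha256).hexdigest()
        return base64.urlsafe_b64encode(json.dumps({"body": body, "mac": tag}).encode()).decode()

    def verify_qr_payload(payload: str, imei: str) -> bool:
        data = json.loads(base64.urlsafe_b64decode(payload))
        raw = json.dumps(data["body"], sort_keys=True).encode()
        mac_ok = hmac.compare_digest(hmac.new(SECRET, raw, hashlib.sha256).hexdigest(), data["mac"])
        imei_ok = data["body"]["imei_digest"] == hashlib.sha256(imei.encode()).hexdigest()
        return mac_ok and imei_ok

    qr = make_qr_payload("tok_1234", "356938035643809")
    print(verify_qr_payload(qr, "356938035643809"))  # True: same device
    print(verify_qr_payload(qr, "490154203237518"))  # False: different device IMEI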
