11

A CELLULAR PHONE-CENTRIC MOBILE NETWORK ARCHITECTURE FOR WIRELESS SMALL SATELLITE TELEMETRY SYSTEM

Li, Mingmei, Guo, Qing 10 1900 (has links)
International Telemetering Conference Proceedings / October 20-23, 2003 / Riviera Hotel and Convention Center, Las Vegas, Nevada / This paper adds information access capabilities for users' mobile terminals to a wireless small satellite telemetry system. The cellular phone-centric mobile network architecture provides the wireless communication link, and telemetry information is delivered to users in a highly personalized form according to the end-user's range. We choose a reference system-level model of the network architecture and compare its performance against a conventional small satellite telemetry network link, with evaluation results derived from a known analytical model. The results of the original hypotheses and the network architecture's prototype, covering both analytical performance evaluation and simulation techniques, are discussed in detail.
12

CCFS cryptographically curated file system

Goldman, Aaron David 07 January 2016 (has links)
The Internet was originally designed to be a next-generation phone system that could withstand a Soviet attack. Today, we ask the Internet to perform tasks that no longer resemble phone calls in the face of threats that no longer resemble Soviet bombardment. However, we have come to rely on names that can be subverted at every level of the stack or simply be allowed to rot by their original creators. It is possible for us to build networks of content that serve the content distribution needs of today while withstanding the hostile environment that all modern systems face. This dissertation presents the Cryptographically Curated File System (CCFS), which offers five properties that we feel a modern content distribution system should provide. The first property is Strong Links, which maintains that only the owner of a link can change the content to which it points. The second property, Permissionless Distribution, allows anyone to become a curator without dependence on a naming or numbering authority. Third, Independent Validation arises from the fact that the object seeking affirmation need not choose the source of trust. Connectivity, the fourth property, allows any curator to delegate and curate the right to alter links. Each curator can delegate the control of a link and that designee can do the same, leaving a chain of trust from the original curator to the one who assigned the content. Lastly, with the property of Collective Confidence, trust does not need to come from a single source, but can instead be an aggregate affirmation. Since CCFS embodies all five of these properties, it can serve as the foundational technology for a more robust Web. CCFS can serve as the base of a web that performs the tasks of today’s Web, but also may outperform it. In the third chapter, we present a number of scenarios that demonstrate the capacity and potential of CCFS. 
The system can be used as a publication platform that has been re-optimized within the constraints of the modern Internet, but not the constraints of decades past. The curated links can still be organized into a hierarchical namespace (e.g., a Domain Name System (DNS)) and de jure verifications (e.g., a Certificate Authority (CA) system), but also support social, professional, and reputational graphs. This data can be distributed, versioned, and archived more efficiently. Although communication systems were not designed for such a content-centric system, the combination of broadcasts and point-to-point communications is perfectly suited for scaling the distribution, while allowing communities to share the burdens of hosting and maintenance. CCFS even supports the privacy of friend-to-friend networks without sacrificing the ability to interoperate with the wider world. Finally, CCFS does all of this without damaging the ability to operate search engines or alert systems, providing a discovery mechanism, which is vital to a usable, useful web. To demonstrate the viability of this model, we built a research prototype. The results of these tests demonstrate that while the CCFS prototype is not ready to be used as a drop-in replacement for all file system use cases, the system is feasible. CCFS is fast enough to be usable and can be used to publish, version, archive, and search data. Even in this crude form, CCFS already demonstrates advantages over previous state-of-the-art systems. When the Internet was designed, there were far fewer computers, they were far weaker than the machines we have now, and they were largely connected to each other over reliable connections. When the Internet was first created, computing was expensive and propagation delay was negligible. Since then, propagation delay has not improved on a Moore's Law curve.
Now, latency has come to dominate all other costs of retrieving content; specifically, the propagation time has come to dominate the latency. In order to improve the latency, we are paying more for storage, processing, and bandwidth. The only way to improve propagation delay is to move the content closer to the destination. In order to have the content close to the demand, we store multiple copies and search multiple locations, thus trading off storage, bandwidth, and processing for lower propagation delay. The computing world should re-evaluate these trade-offs because the situation has changed. We need an Internet that is designed for the technologies used today, rather than the tools of the 20th century. CCFS, which embraces this trade-off in favor of lower propagation delay, is better suited for 21st-century technologies. Although CCFS is not preferable in all situations, it can still offer tremendous value. Better robustness, performance, and democracy make CCFS a contribution to the field. Robustness comes from the cryptographic assurances provided by the five properties of CCFS. Performance comes from the locality of content. Democracy arises from the lack of a centralized authority that may grant the right of free speech only to those who espouse rhetoric compatible with their ideals. Combined, this model for a cryptographically secure, content-centric system provides a novel contribution to the state of communications technology and information security.
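The "Strong Links" property described above — only the owner of a link can change the content to which it points — can be sketched as a content-addressed link whose retargeting requires the curator's key. This is an illustrative toy only, not the dissertation's implementation: CCFS would use public-key signatures, whereas an HMAC stands in for a signature here, and all names are hypothetical.

```python
import hashlib
import hmac

def content_address(data: bytes) -> str:
    """A content-addressed target: the SHA-256 hash of the bytes themselves."""
    return hashlib.sha256(data).hexdigest()

class CuratedLink:
    """A 'strong link': a name bound to a content hash, authenticated by the
    curator's key. Retargeting the link without that key breaks verification.
    (HMAC is a stand-in for a real public-key signature.)"""

    def __init__(self, curator_key: bytes, name: str, data: bytes):
        self.name = name
        self.target = content_address(data)
        self.sig = self._sign(curator_key)

    def _sign(self, key: bytes) -> str:
        payload = (self.name + self.target).encode()
        return hmac.new(key, payload, hashlib.sha256).hexdigest()

    def verify(self, curator_key: bytes) -> bool:
        """True only if name and target still match the curator's signature."""
        return hmac.compare_digest(self.sig, self._sign(curator_key))
```

A tampered link (retargeted without re-signing) fails verification, which is the property the abstract relies on.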
13

User-centric quality of service provisioning in IP networks

Culverhouse, Mark January 2012 (has links)
The Internet has become the preferred transport medium for almost every type of communication, continuing to grow, both in terms of the number of users and delivered services. Efforts have been made to ensure that time sensitive applications receive sufficient resources and subsequently receive an acceptable Quality of Service (QoS). However, typical Internet users no longer use a single service at a given point in time, as they are instead engaged in a multimedia-rich experience, comprising many different concurrent services. Given the scalability problems raised by the diversity of the users and traffic, in conjunction with their increasing expectations, the task of QoS provisioning can no longer be approached from the perspective of providing priority to specific traffic types over coexisting services; either through explicit resource reservation, or traffic classification using static policies, as is the case with the current approach to QoS provisioning, Differentiated Services (DiffServ). This current use of static resource allocation and traffic shaping methods reveals a distinct lack of synergy between current QoS practices and user activities, thus highlighting a need for a QoS solution reflecting the user services. The aim of this thesis is to investigate and propose a novel QoS architecture, which considers the activities of the user and manages resources from a user-centric perspective. The research begins with a comprehensive examination of existing QoS technologies and mechanisms, arguing that current QoS practices are too static in their configuration and typically give priority to specific individual services rather than considering the user experience. The analysis also reveals the potential threat that unresponsive application traffic presents to coexisting Internet services and QoS efforts, and introduces the requirement for a balance between application QoS and fairness.
This thesis proposes a novel architecture, the Congestion Aware Packet Scheduler (CAPS), which manages and controls traffic at the point of service aggregation, in order to optimise the overall QoS of the user experience. The CAPS architecture, in contrast to traditional QoS alternatives, places no predetermined precedence on specific traffic types; instead, it adapts QoS policies to each individual's Internet traffic profile and dynamically controls the ratio of user services to maintain an optimised QoS experience. The rationale behind this approach was to enable a QoS optimised experience for each Internet user and not just those using preferred services. Furthermore, unresponsive bandwidth intensive applications, such as Peer-to-Peer, are managed fairly while minimising their impact on coexisting services. The CAPS architecture has been validated through extensive simulations with the topologies used replicating the complexity and scale of real-network ISP infrastructures. The results show that for a number of different user-traffic profiles, the proposed approach achieves an improved aggregate QoS for each user when compared with best-effort Internet, traditional DiffServ and Weighted RED configurations. Furthermore, the results demonstrate that the proposed architecture not only provides an optimised QoS to the user, irrespective of their traffic profile, but through the avoidance of static resource allocation, can adapt with the Internet user as their use of services changes.
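The scheduling idea described above — no fixed per-class priority, but a per-user split of capacity across concurrent services with unresponsive traffic capped — might be sketched as follows. This is a simplified illustration under assumed flow descriptors, not the CAPS algorithm itself; the field names and the 30% P2P cap are invented for the example.

```python
def allocate_bandwidth(flows, capacity_kbps, p2p_cap=0.3):
    """Split one user's access-link capacity across their concurrent services.
    Unresponsive bulk flows (tagged 'p2p') are capped at a fraction of the
    link; the remainder is divided among responsive services in proportion
    to each service's nominal demand."""
    p2p = [f for f in flows if f["kind"] == "p2p"]
    rest = [f for f in flows if f["kind"] != "p2p"]

    # Cap unresponsive traffic so it cannot starve coexisting services.
    p2p_demand = sum(f["demand"] for f in p2p)
    p2p_budget = min(p2p_demand, capacity_kbps * p2p_cap) if p2p else 0.0

    # Share the remaining capacity proportionally to demand.
    rest_budget = capacity_kbps - p2p_budget
    total_demand = sum(f["demand"] for f in rest) or 1.0
    alloc = {f["name"]: rest_budget * f["demand"] / total_demand for f in rest}
    for f in p2p:
        alloc[f["name"]] = p2p_budget / len(p2p)
    return alloc
```

Note the contrast with a static DiffServ-style policy: the split is recomputed from whatever mix of services this user is running right now, rather than from fixed traffic classes.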
14

Evaluating the claims of network centric warfare

Thomas, Jeffrey Alexander 12 1900 (has links)
Human Systems Integration Report / In response to technological advances, Network Centric Warfare (NCW) emerged as a theory to leverage the technology available in today's world. Advocates of NCW claim that technology will improve information sharing by "robustly networking a force", thereby improving mission effectiveness. This study proposes a methodology with which to test the first tenet of NCW: a robustly networked force improves information sharing. Lessons learned from Human Systems Integration (HSI) demonstrate that in order to improve mission effectiveness, characteristics of both the human and the technology must be considered. As such, the impact of human characteristics and traits on mission effectiveness, as measured by individual and team performance, is assessed using a computer simulation, C3Fire. Results at the individual level suggest that persons scoring high on extraversion and low on pessimism perform better than those scoring low on extraversion and high on pessimism. In contrast, at the team level, teams that are homogeneous as measured by optimism-pessimism perform worse than diverse teams. Results of this thesis provide a methodology with which to examine NCW's claims in a laboratory setting. Preliminary evidence demonstrates the need to consider human characteristics and traits in the design and composition of network teams.
15

The benefit of 802.20 technologies on information flow in network centric warfare

Huffaker, Jacob A. 09 1900 (has links)
"This thesis will focus on the area of 802.20 wireless networking and how this technology will vastly benefit the US military forces, especially in the Network Centric concept of operations, where information flow is crucial. It will investigate this technology using published literature and previously gathered experimental data. This thesis will then relate its findings to Network Centric Warfare and the matters that could be most affected by this new technology." p. i.
16

Management of Big Annotations in Relational Database Management Systems

Ibrahim, Karim 24 April 2014 (has links)
Annotations play a key role in understanding and describing data, and annotation management has become an integral component in most emerging applications such as scientific databases. Scientists need to exchange not only data but also their thoughts, comments and annotations on the data. Annotations represent comments, lineage of data, descriptions and much more. Therefore, several annotation management techniques have been proposed to handle annotations efficiently and abstractly. However, with the increasing scale of collaboration and the extensive use of annotations among users and scientists, the number and size of the annotations may far exceed the size of the original data itself, and current annotation management techniques do not address large-scale annotation management. In this work, we tackle big annotations from three different perspectives: (1) user-centric annotation propagation, (2) proactive annotation management, and (3) InsightNotes summary-based querying. We capture users' preferences in profiles and personalize annotation propagation at query time by reporting the most relevant annotations (per tuple) for each user based on a time plan. We provide three time-based plans and support static and dynamic profiles for each user. We support proactive annotation management, which suggests data tuples to be annotated when a new annotation references a data value but the user does not annotate the data precisely. Moreover, we extend InsightNotes (summary-based annotation management in relational databases) with a query language that enables users to query annotation summaries and add predicates on the summaries themselves. Our system is implemented inside PostgreSQL.
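The user-centric propagation step — report only the most relevant annotations per tuple, personalized to each user's profile at query time — can be sketched as a ranking function. The field names and scoring rule below are hypothetical illustrations, not the system's actual schema or relevance model.

```python
def top_annotations(annotations, profile, k=3):
    """Rank one tuple's annotations by overlap with the user's interest
    profile, breaking ties toward newer annotations, and report only the
    top k instead of propagating every annotation to every user."""
    def score(ann):
        overlap = len(set(ann["tags"]) & profile["interests"])
        return (overlap, -ann["age_days"])  # higher overlap, then newer
    return sorted(annotations, key=score, reverse=True)[:k]
```

In a real deployment this filter would run per tuple at query time, so two users querying the same table can see different annotation sets.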
17

Architecture Centric Commenting on ERP System Development - Using SUNON Company as An Example

Yu, Bing-wen 12 January 2004 (has links)
none
18

The Study of The Network-Centric Innovation Model of Web2.0

Chiou, Chih-ming 23 July 2009 (has links)
none
19

An assessment, survey, and systems engineering design of information sharing and discovery systems in a network-centric environment

De Soto, Kristine M. January 2009 (has links) (PDF)
Thesis (M.S. in Systems Engineering)--Naval Postgraduate School, December 2009. / Thesis Advisor(s): Goshorn, Rachel E. Second Reader: Shebalin, Paul V. "December 2009." Description based on title screen as viewed on January 27, 2010. Author(s) subject terms: Systems engineering, systems architecture, network-centric systems, network-centric warfare, NCW, network-centric operations, NCO, information sharing, information discovery. Includes bibliographical references (p. 101-103). Also available in print.
20

Clinical estimation of condylar translation associated with non-coincidence of centric relation and centric occlusion

Setchell, Derrick J. January 1976 (has links)
Thesis (M.S.)--University of Michigan, Ann Arbor, 1976. / Typescript (photocopy). Includes bibliographical references (leaves 72-75). Also issued in print.
