31. Voice activated personal assistant: Privacy concerns in the public space

Easwara Moorthy, Aarthi (23 April 2014)
No description available.
32. Information technology programming standards and annual project maintenance costs

Mynyk, John (15 February 2014)
Organizations that depend on the use of IT in their business models must maintain their systems and keep them current to survive (Filipek, 2008; Kulkarni, Kumar, Mookerjee, & Sethi, 2009; Unterkalmsteiner et al., 2012). Because most IT departments allocate as much as 80% of their budget to maintaining stability, leaving only the remaining 20% for improvements (Telea et al., 2010), the high cost of stability may be a reason many IT organizations cannot afford efficient staffing and may even jeopardize the existence of the organization (Filipek, 2008; Talib, Abdullah, Atan, & Murad, 2010). The purpose of this exploratory mixed methods study was to discover the IT programming standards used in IT departments that predict a decrease in project maintenance costs. The study employed exploratory mixed methods data collection and analysis to develop and test a collection of universal programming standards. The qualitative portion of the study produced a list of IT programming standards drawn from the Fortune 20 companies of 2011. Using survey data from IT departments in the Fortune 500 companies of 2011, the quantitative portion correlated the degree of enforcement of each IT programming standard with a decrease in average project maintenance costs using a backward stepwise regression. Using a 95% confidence interval and a 5% margin of error (α = .05), the backward stepwise regression discarded 18 of the 22 IT programming standards. The remaining correlations give evidence that (a) the more a department enforces waiting for feedback, the higher the maintenance costs; (b) the more a department enforces having the architectural team develop coding guidelines, the lower the maintenance costs; and (c) the more an IT department enforces following change management procedures, the higher the maintenance costs.
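The abstract describes a backward stepwise regression relating the enforcement of 22 programming standards to average maintenance costs. A minimal sketch of that kind of backward elimination is shown below, assuming the survey data sit in a pandas DataFrame with one column per standard plus a `maintenance_cost` column; the α = .05 cutoff mirrors the abstract, but the column layout and helper name are illustrative, not the author's instrument.

```python
import pandas as pd
import statsmodels.api as sm

def backward_stepwise(df: pd.DataFrame, target: str, alpha: float = 0.05):
    """Backward elimination: repeatedly drop the least significant predictor.

    Hypothetical column layout: one column per programming standard holding its
    degree-of-enforcement score, plus a `target` column with maintenance costs.
    """
    predictors = [c for c in df.columns if c != target]
    while predictors:
        X = sm.add_constant(df[predictors])
        model = sm.OLS(df[target], X).fit()
        # p-values of the predictors only (skip the intercept term)
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] <= alpha:
            return model, predictors      # all remaining standards significant
        predictors.remove(worst)          # discard the weakest standard
    return None, []

# Hypothetical usage: df has 22 enforcement columns plus "maintenance_cost"
# model, kept = backward_stepwise(df, target="maintenance_cost", alpha=0.05)
# print(kept, model.params)
```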
33. User-defined key pair protocol

Hassan, Omar (26 February 2014)
E-commerce applications have flourished on the Internet because of their ability to perform secure transactions in which the identities of the two parties can be verified and the communications between them encrypted. The Transport Layer Security (TLS) protocol makes secure transactions possible by creating a secure tunnel between the user's browser and the server with the help of Certificate Authorities (CAs). CAs are third parties that can be trusted by both the user's browser and the server and are responsible for establishing secure communication between them. The major limitation of this model is the use of CAs as single points of trust, which can introduce severe security breaches globally. In my thesis, I provide a high-level design for a new protocol in the application layer of the TCP/IP suite that builds a secure tunnel between the user's browser and the server without the involvement of any third party. My proposed protocol is called User-Defined Key Pair (UDKP), and its objective is to build a secure tunnel between the user's browser and the server using a public/private key pair generated for the user on the fly inside the user's browser, based on the user's credential information. This key pair is used by the protocol instead of the server certificate as the starting point for creating the secure tunnel.
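The central idea is deriving the user's key pair on the fly from credential information and using it, rather than a server certificate, to bootstrap the tunnel. A minimal sketch of one way such a derivation could look is below, assuming a password-based KDF feeding an X25519 key agreement; the salt handling, KDF parameters, and choice of X25519 are my assumptions for illustration, not details taken from the thesis.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_user_keypair(username: str, password: str) -> X25519PrivateKey:
    """Deterministically derive a key pair from user credentials (illustrative).

    Assumption: the username doubles as the KDF salt so the same credentials
    always yield the same key pair; a real design would need a vetted scheme.
    """
    seed = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), username.encode(), 600_000, dklen=32
    )
    return X25519PrivateKey.from_private_bytes(seed)

def derive_tunnel_key(own_key: X25519PrivateKey,
                      peer_public: X25519PublicKey) -> bytes:
    """Agree on a shared secret and stretch it into a symmetric tunnel key."""
    shared = own_key.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"udkp tunnel key").derive(shared)

# Hypothetical usage:
# user_key = derive_user_keypair("alice@example.com", "correct horse battery")
# session_key = derive_tunnel_key(user_key, server_public_key)
```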
34. Factors Influencing the Adoption of Cloud Computing Driven by Big Data Technology: A Quantitative Study

Chowdhury, Naser (25 August 2018)
A renewed interest in cloud computing adoption has occurred in academic and industry settings because emerging technologies have strong links to cloud computing and Big Data technology. Big Data technology is driving cloud computing adoption in large business organizations. For cloud computing adoption to increase, cloud computing must transition from a low-level technology to a high-level business solution. The purpose of this study was to develop a predictive model for cloud computing adoption that included Big Data technology-related variables, along with variables from two widely used technology adoption theories: the technology acceptance model (TAM) and the technology-organization-environment (TOE) framework. The inclusion of Big Data technology-related variables extended cloud computing's mixed-theory adoption approach. The six independent variables were perceived usefulness, perceived ease of use, security effectiveness, cost-effectiveness, intention to use Big Data technology, and the need for Big Data technology. Data collected from 182 U.S. IT professionals or managers were analyzed using binary logistic regression. The results showed that the model involving the six independent variables was statistically significant, predicting cloud computing adoption with 92.1% accuracy. Independently, perceived usefulness was the only predictor variable that could increase cloud computing adoption. These results indicate that cloud computing may grow if it can be leveraged into emerging Big Data technology trends to make cloud computing more useful for its users.
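The study fits a binary logistic regression with six predictors and reports 92.1% classification accuracy. A minimal sketch of that kind of analysis is below, assuming the survey responses live in a pandas DataFrame; the column names are placeholders for the six constructs listed in the abstract, and the in-sample accuracy check is illustrative rather than the author's exact procedure.

```python
import pandas as pd
import statsmodels.api as sm

# Placeholder names for the six predictors described in the abstract
PREDICTORS = [
    "perceived_usefulness",
    "perceived_ease_of_use",
    "security_effectiveness",
    "cost_effectiveness",
    "intention_to_use_big_data",
    "need_for_big_data",
]

def fit_adoption_model(df: pd.DataFrame):
    """Binary logistic regression of cloud adoption (0/1) on six predictors."""
    X = sm.add_constant(df[PREDICTORS])
    y = df["cloud_adoption"]                    # 1 = adopted, 0 = not adopted
    model = sm.Logit(y, X).fit(disp=False)
    predicted = (model.predict(X) >= 0.5).astype(int)
    accuracy = (predicted == y).mean()          # in-sample classification accuracy
    return model, accuracy

# Hypothetical usage with a 182-respondent survey frame:
# model, acc = fit_adoption_model(survey_df)
# print(model.summary(), f"accuracy = {acc:.1%}")
```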
35. Enhancing the Internet of Things Architecture with Flow Semantics

DeSerranno, Allen Ronald (2 December 2017)
Internet of Things ('IoT') systems are complex, asynchronous solutions often comprised of various software and hardware components developed in isolation from each other. These components function with different degrees of reliability and performance over an inherently unreliable network, the Internet. Many IoT systems are developed within silos that do not provide the ability to communicate or be interoperable with other systems and platforms. Literature exists on how these systems should be designed, how they should interoperate, and how they could be improved, but practice does not always consult literature.

The work brings together a proposed reference architecture for the IoT and engineering practices for flow semantics found in existing literature with a commercial implementation of an IoT platform. It demonstrates that the proposed IoT reference architecture and flow-service-quality engineering practices, when integrated, can produce a more robust system with increased functionality and interoperability. It shows how such practices can be implemented in a commercial solution, and explores the value provided to the system when they are implemented. This work contributes to the current understanding of how complex IoT systems can be developed to be more reliable and interoperable using reference architectures and flow semantics. The work highlights the value of integrating academic solutions with commercial implementations of complex systems.
36. Contention Alleviation in Network-on-Chips

Xiang, Xiyue (21 December 2017)
In a network-on-chip (NoC) based system, the NoC is a shared resource among multiple processor cores. Requests generated by different applications running on different cores can create severe contention in NoCs. This contention can jeopardize system performance and power efficiency in many different ways. First and foremost, we show that contention in NoCs can induce inter-application interference, leading to overall system performance degradation, preventing fair progress of different applications, and causing starvation of unfairly treated applications. We propose the NoC Application Slowdown (NAS) Model, the first online model that accurately estimates how much network delays due to interference contribute to the overall stall time of each application. We use NAS to develop Fairness-Aware Source Throttling (FAST), a mechanism that employs slowdown predictions to control the network injection rates of applications in a way that minimizes system unfairness. Furthermore, although removing buffers from the constituent routers can reduce power consumption and hardware complexity, the bufferless NoC is subject to growing deflection caused by contention, leading to severe performance degradation and squandering its power-saving potential. We then propose Deflection Containment (DeC) for the bufferless NoC to address its notorious shortcoming of excessive deflection, improving performance and reducing power. With a link added to each router for bridging subnetworks (whose aggregated link width equals a given value, say 128 bits), DeC lets a contending flit in one subnetwork be forwarded to another subnetwork instead of deflected, yielding extraordinary deflection reduction and greatly enriching path diversity. In addition, the router microarchitecture under DeC is rectified to shorten the critical path and raise network bandwidth. Last but not least, besides 1-to-1 flows, growing core counts urgently require effective hardware support to alleviate the contention caused by 1-to-many and many-to-1 flows. We propose Carpool, the first bufferless NoC optimized for 1-to-many and many-to-1 traffic. Carpool adaptively forks new flit replicas and performs traffic aggregation at appropriate intermediate routers to lessen bandwidth demands and reduce contention. We propose the microarchitecture of Carpool routers and develop parallel port allocation, which supports multicast and reduces critical paths to improve network bandwidth.
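As a rough illustration of the FAST idea (using per-application slowdown estimates to steer injection rates toward fairness), here is a toy control loop, assuming each application already has a NAS-style slowdown estimate available each epoch. The unfairness threshold, step size, and rate bounds are invented purely for illustration and are not the published mechanism's parameters.

```python
def adjust_injection_rates(slowdowns, rates, unfairness_target=1.2,
                           step=0.05, min_rate=0.1, max_rate=1.0):
    """Toy fairness-aware throttling loop (illustrative, not the published FAST).

    slowdowns: per-app slowdown estimates (shared-run time / alone-run time),
               e.g. produced by a NAS-style online model.
    rates:     current per-app injection rates in (0, 1].
    When unfairness exceeds the target, throttle the least-slowed (most
    interfering) app and relax the most-slowed one.
    """
    unfairness = max(slowdowns) / min(slowdowns)
    new_rates = list(rates)
    if unfairness > unfairness_target:
        victim = slowdowns.index(max(slowdowns))   # app suffering the most
        culprit = slowdowns.index(min(slowdowns))  # app suffering the least
        new_rates[culprit] = max(min_rate, rates[culprit] - step)
        new_rates[victim] = min(max_rate, rates[victim] + step)
    return new_rates, unfairness

# Hypothetical epoch: four apps with estimated slowdowns at full injection rate
# rates, unf = adjust_injection_rates([3.1, 1.2, 1.8, 1.4], [1.0, 1.0, 1.0, 1.0])
```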
37. Computational methods for multi-omic models of cell metabolism and their importance for theoretical computer science

Angione, Claudio (January 2015)
To paraphrase Stan Ulam, a Polish mathematician who became a leading figure in the Manhattan Project, in this dissertation I focus not only on how computer science can help biologists, but also on how biology can inspire computer scientists. On one hand, computer science provides powerful abstraction tools for metabolic networks. Cell metabolism is the set of chemical reactions taking place in a cell, with the aim of maintaining the living state of the cell. Due to the intrinsic complexity of metabolic networks, predicting the phenotypic traits resulting from a given genotype and metabolic structure is a challenging task. To this end, mathematical models of metabolic networks, called genome-scale metabolic models, contain all known metabolic reactions in an organism and can be analyzed with computational methods. In this dissertation, I propose a set of methods to investigate models of metabolic networks. These include multi-objective optimization, sensitivity, robustness and identifiability analysis, and are applied to a set of genome-scale models. Then, I augment the framework to predict metabolic adaptation to a changing environment. The adaptation of a microorganism to new environmental conditions involves shifts in its biochemical network and in the gene expression level. However, gene expression profiles do not provide a comprehensive understanding of the cellular behavior. Examples are the cases in which similar profiles may cause different phenotypic outcomes, while different profiles may give rise to similar behaviors. In fact, my idea is to study the metabolic response to diverse environmental conditions by predicting and analyzing changes in the internal molecular environment and in the underlying multi-omic networks. I also adapt statistical and mathematical methods (including principal component analysis and hypervolume) to evaluate short-term metabolic evolution and perform comparative analysis of metabolic conditions. On the other hand, my vision is that a biomolecular system can be cast as a "biological computer", therefore providing insights into computational processes. I therefore study how computation can be performed in a biological system by proposing a map between a biological organism and the von Neumann architecture, where metabolism executes reactions mapped to instructions of a Turing machine. A Boolean string represents the genetic knockout strategy and also the executable program stored in the "memory" of the organism. I use this framework to investigate scenarios of communication among cells, gene duplication, and lateral gene transfer. Remarkably, this mapping allows estimating the computational capability of an organism, also taking into account transmission events and communication outcomes.
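Genome-scale metabolic models of the kind described here are commonly analyzed by solving linear programs over the stoichiometric matrix (flux balance analysis), and the multi-objective and sensitivity analyses mentioned in the abstract build on that style of formulation. A tiny sketch with an invented three-reaction network is below; the matrix, bounds, and objective are made up purely to show the shape of the computation, not taken from the dissertation's models.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (metabolites x reactions), invented for illustration:
# columns = [uptake, conversion, biomass]; rows = two internal metabolites.
S = np.array([
    [1.0, -1.0,  0.0],
    [0.0,  1.0, -1.0],
])
bounds = [(0.0, 10.0), (0.0, 10.0), (0.0, None)]   # flux bounds per reaction

# Flux-balance-style analysis: maximize the biomass flux v[2] subject to
# S @ v = 0 (steady state). linprog minimizes, so negate the objective.
c = np.array([0.0, 0.0, -1.0])
result = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds)

if result.success:
    fluxes = result.x
    print("optimal flux distribution:", fluxes)
    print("max biomass flux:", fluxes[2])
```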
38. Measurement-driven characterization of the mobile environment

Soroush, Hamed (1 January 2013)
The concurrent deployment of high-quality wireless networks and large-scale cloud services offers the promise of secure, ubiquitous access to seemingly limitless amounts of content. However, as users' expectations have grown more demanding, the performance and connectivity failures endemic to the existing networking infrastructure have become more apparent. These problems are generally exacerbated by user mobility. The work presented in this dissertation demonstrates that the performance of services for mobile users is significantly affected by environmental factors that are hard to characterize ahead of deployment. This work includes the development and evaluation of large-scale mobile experimentation infrastructures (DOME, GENI) that facilitated longitudinal studies of today's technologically diverse mobile environment over a period of four years. Based on the insights gained from these studies, a mechanism called Spider is presented that efficiently utilizes Wi-Fi deployments in highly mobile scenarios to achieve higher throughput and improved connectivity. This work presents the first in-depth analysis of the performance of attempting concurrent AP connections from highly mobile clients. Spider provides a 400% improvement in throughput and a 54% improvement in connectivity over stock Wi-Fi implementations. The last part of this dissertation demonstrates that there are predictable differences in the performance of a cellular network across different geographical locations of a town. Consequently, patterns of data transmission between a server on the Internet and a moving cell phone can reveal the geographic travel path of that phone. While the GPS and location-awareness features on phones explicitly share this information, phone users will likely be surprised to learn that disabling these features does not suffice to prevent a remote server from determining their travel path. We showed that a simple HMM-based classifier can discover and exploit features of the geography surrounding possible travel paths to determine the path a phone took, using only data visible at the remote server on the Internet. Having gathered hundreds of traces over a large geographic area, we showed that the HMM-based technique is able to distinguish mobile phones from stationary phones with up to 94.7% accuracy. Routes taken by each mobile phone could be distinguished with up to 75.9% accuracy using the same technique. This dissertation proposes new tools and techniques for characterizing the impact of the environment on the performance of mobile networks. The concrete set of results and insights gained from this work demonstrates mechanisms for improving connectivity and throughput in highly mobile scenarios while raising new challenges for maintaining the privacy of mobile users.
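The path-inference result rests on an HMM-based classifier that works only from transmission patterns visible at the server. A minimal sketch of the mobile-versus-stationary decision is below: two hand-specified two-state Gaussian HMMs are compared by forward-algorithm log-likelihood over a throughput trace. The state definitions, parameters, and features are invented for illustration; the dissertation's actual models were built from hundreds of collected traces.

```python
import numpy as np

def gaussian_logpdf(x, mean, std):
    """Elementwise log N(x; mean, std^2) for vectors of per-state parameters."""
    return -0.5 * np.log(2 * np.pi * std**2) - (x - mean) ** 2 / (2 * std**2)

def hmm_loglik(obs, start, trans, means, stds):
    """Log-likelihood of a sequence under a Gaussian-emission HMM
    (forward algorithm in log space)."""
    log_alpha = np.log(start) + gaussian_logpdf(obs[0], means, stds)
    for x in obs[1:]:
        m = log_alpha.max()                     # log-sum-exp over previous states
        log_alpha = (np.log(np.exp(log_alpha - m) @ trans) + m
                     + gaussian_logpdf(x, means, stds))
    m = log_alpha.max()
    return m + np.log(np.exp(log_alpha - m).sum())

# Invented parameters: a "stationary" model with steady throughput and a
# "mobile" model that alternates between good and degraded connectivity.
stationary = dict(start=np.array([0.9, 0.1]),
                  trans=np.array([[0.95, 0.05], [0.05, 0.95]]),
                  means=np.array([5.0, 4.0]), stds=np.array([0.5, 0.5]))
mobile = dict(start=np.array([0.5, 0.5]),
              trans=np.array([[0.7, 0.3], [0.3, 0.7]]),
              means=np.array([5.0, 1.0]), stds=np.array([1.0, 1.0]))

def classify(throughput_trace):
    """Label a per-second throughput trace (Mbps) as 'mobile' or 'stationary'."""
    obs = np.asarray(throughput_trace, dtype=float)
    return ("mobile" if hmm_loglik(obs, **mobile) > hmm_loglik(obs, **stationary)
            else "stationary")

# print(classify([5.1, 4.8, 1.2, 0.9, 5.0, 1.1]))
```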
