91

Application development using client-server technology /

Chowdhury, Evan, January 2004 (has links) (PDF)
Thesis (M.S.) in Computer Engineering--University of Maine, 2004. / Includes vita. Includes bibliographical references (leaves 50-51).
92

The optimum communications architecture for deep level gold mining

Miller, Mark Henry Bruce. January 2000 (has links)
Thesis (M.Eng.(Electrical Engineering))--University of Pretoria, 2000. / Includes abstract in English. Includes bibliographical references.
93

Multiple antenna systems in a mobile-to-mobile environment

Kang, Heewon. January 2006 (has links)
Thesis (Ph. D.)--Electrical and Computer Engineering, Georgia Institute of Technology, 2007. / Gordon L. Stuber, Committee Chair; Guillermo Goldsztein, Committee Member; Gregory D. Durgin, Committee Member; John R. Barry, Committee Member; Mary Ann Ingram, Committee Member.
94

Application Development Using Client-Server Technology

Chowdhury, Evan January 2004 (has links) (PDF)
No description available.
95

A REFERENCE ARCHITECTURE FOR NETWORK FUNCTION VIRTUALIZATION

Unknown Date (has links)
Cloud computing provides many services to potential consumers, one of which is the provision of network functions using virtualization. Network Function Virtualization (NFV) is a new technology that aims to improve the way we consume network services. It differs from legacy networking solutions, where consumers must buy and install hardware equipment themselves; in NFV, networks are provided to users as Software as a Service (SaaS). Implementing NFV brings many benefits, including faster module development for network functions, more rapid deployment, enhancement of the network on cloud infrastructures, and a lower overall cost of operating a network system. All these benefits are achieved in NFV by turning physical network functions into Virtual Network Functions (VNFs). However, because this technology is still a new network paradigm, integrating the virtual environment into a legacy environment, or moving entirely to NFV, adds to the complexity of adopting an NFV system. Moreover, a network service may be composed of several components provided by different service providers, which further increases the complexity and heterogeneity of the system. We apply abstract architectural modeling to describe and analyze the NFV architecture, using architectural patterns to build a flexible Reference Architecture (RA) for NFV that describes the system and how it works. RAs have proven to be a powerful way to abstract complex systems that lack semantics. Having an RA for NFV helps us understand the system and how it functions, and helps expose possible vulnerabilities that may lead to threats against the system. In the future, this RA could be extended into a Security Reference Architecture (SRA) by adding misuse and security patterns to cover potential threats and vulnerabilities in the system.
Our audience comprises system designers, system architects, and security professionals interested in building a secure NFV system. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2020. / FAU Electronic Theses and Dissertations Collection
96

Next Generation Cloud Computing Architectures: Performance and Pricing

Mahajan, Kunal January 2021 (has links)
Cloud providers need to optimize container deployments to use their network, compute, and storage resources efficiently. They also require an attractive pricing strategy for compute services such as containers, virtual machines, and serverless computing in order to attract users, maximize profits, and achieve a desired utilization of their resources. This thesis tackles the twofold challenge of achieving high performance in container deployments and identifying the right pricing for compute services. For performance, the thesis presents a transport-adaptive network architecture (D-TAIL) that improves tail latencies. Existing transport protocols such as Homa and pFabric [1, 2] use the Shortest Remaining Processing Time (SRPT) scheduling policy, which is known to starve long flows because SRPT prioritizes short flows. D-TAIL addresses this limitation by taking the age of a flow into consideration when deciding its priority. D-TAIL shows maximum reductions of 72%, 29.66%, and 28.39% in 99th-percentile flow completion time (FCT) for the transport protocols DCTCP, pFabric, and Homa, respectively. In addition, the thesis presents a container deployment design that uses a peer-to-peer network and a virtual file system with content-addressable storage to address the problem of cold starts in existing container deployment systems. The proposed design increases compute availability, reduces storage requirements, and prevents network bottlenecks. For pricing, the thesis studies the tradeoffs between serverless computing (SC) and traditional cloud computing (virtual machines, VMs) using realistic cost models, queueing-theoretic performance models, and a game-theoretic formulation. For customers, we identify the workload distribution between SC and VMs that minimizes their cost while maintaining a given performance constraint. For the cloud provider, we identify the SC and VM prices that maximize its profit.
The main result is the identification and characterization of three optimal operational regimes for both customers and the provider, that leverage either SC or VM only, or both, in a hybrid configuration.
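The age-aware scheduling idea in the abstract above can be sketched in a few lines. The linear priority form and the `alpha` weight below are illustrative assumptions, not the thesis's actual formula:

```python
# Hypothetical sketch of age-aware SRPT scheduling. Pure SRPT ranks flows by
# remaining size only, which can starve long flows; folding in the flow's age
# lets a long-waiting flow eventually win. 'alpha' is an assumed knob.
def srpt_priority(remaining_bytes):
    return remaining_bytes                    # smaller value = higher priority

def age_aware_priority(remaining_bytes, age, alpha=1.0):
    return remaining_bytes - alpha * age      # aging raises priority over time

flows = [
    {"id": "short-new", "remaining": 10, "age": 1},
    {"id": "long-old",  "remaining": 50, "age": 45},
]

next_srpt = min(flows, key=lambda f: srpt_priority(f["remaining"]))
next_aged = min(flows, key=lambda f: age_aware_priority(f["remaining"], f["age"]))
# SRPT always serves the short flow first; the age-aware policy serves the
# long flow once its accumulated age outweighs its remaining size.
```

Under this toy policy the long, old flow is scheduled next, while plain SRPT would keep preferring newly arriving short flows.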
97

A research model to improve understanding of the extent of usage of enterprise resource planning systems in a university

Mudaly, Sherwin 03 October 2014 (has links)
Submitted in fulfillment of the requirements of the Master of Technology Degree in Information Technology, Durban University of Technology, Durban, South Africa, 2013. / This study reports on the development of a model for improving understanding of the extent of Enterprise Resource Planning (ERP) system usage at the Durban University of Technology. Previous research revealed that university ERP systems are not fully utilized by end-users, resulting in low usage and institutional inefficiencies. Consequently, stakeholders (particularly students and government) pressure universities to improve their efficiency and performance. To address the problem, this study developed a research model by adapting the TAM2 theoretical model with the additional IT usage factors of training, management support, perceived behavioural control, and technical support. A dataset of 312 full-time academics was generated by a survey method. The Partial Least Squares (PLS) technique was used to determine the predictive power of the developed research model, which was then compared with other adoption and usage models to assess its superiority. The model was empirically tested, and the findings demonstrated an improvement in predictive power as a result of the additional IT usage factors and the interaction effects of gender, age, and experience. The comparison shows that the research model explained 23% of the variability in ERP system usage, against 3.6% for the original TAM2 model and 5.2% for the original TPB model. With the exception of management support, the additional IT usage factors of training, technical support, and perceived behavioural control were found to have a significant relationship with ERP system usage.
The test of gender, experience, and age interaction effects revealed that gender and experience moderated the relationships between the independent factors of technical support and management support and the dependent factor of ERP system usage. In addition, gender moderated the effect of perceived behavioural control on ERP system usage, but not the effect of training, which was instead moderated by experience. Age did not moderate the relationship between the additional IT usage factors and ERP system usage. Consequently, the Durban University of Technology will have to address these additional IT usage factors, and the gender and experience interaction effects, more precisely in its attempt to improve ERP system usage.
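The predictive-power comparison in the abstract above rests on variance explained (R²). A minimal sketch of how R² is computed, with invented usage scores and model predictions standing in for the real survey data:

```python
# Coefficient of determination (R^2): the share of variance in the observed
# outcome that a model's predictions account for. The data below are invented
# for illustration; the thesis's 23% / 3.6% / 5.2% figures come from PLS runs.
def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))   # residual SS
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)               # total SS
    return 1 - ss_res / ss_tot

usage = [2, 4, 5, 4, 5, 7, 8, 6]                # observed ERP usage scores
pred = [2.5, 3.5, 5.0, 4.5, 5.5, 6.5, 7.5, 6.0]  # a model's predictions
print(r_squared(usage, pred))
```

A model explaining 23% of variance would return 0.23 here; comparing such values across TAM2, TPB, and the extended model is exactly the comparison the abstract reports.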
98

Design and performance evaluation of a proposed backbone network for PC-Networks interconnection

Fang, Jun-Wai, 1960- January 1989 (has links)
This thesis concerns the design of a high-speed backbone network that provides a high-bandwidth interconnection for various Personal Computer Networks (PC-Networks) with an integrated voice and data service. With optical fiber as the transmission medium, several existing topologies and protocols are discussed for the backbone network design. The token ring protocol is simulated and evaluated to determine a suitable buffer size and suitable voice and data packet lengths for the backbone network. The Network II.5 simulation tool is used to run the token ring model under different parameters. The Network Interface Unit (NIU) is then designed from the simulation results with cost-effectiveness in mind.
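The buffer-sizing question in the abstract above can be illustrated with a toy token-passing simulation. Network II.5 itself is a commercial tool; the station count, arrival model, and all parameters below are invented stand-ins, not the thesis's configuration:

```python
# Toy token-ring sketch: each station buffers arriving packets and may send
# one packet only while it holds the token. Packets arriving to a full buffer
# are dropped, so drop counts reveal whether a buffer size is adequate.
import random

def simulate(stations=4, buffer_size=5, steps=100, arrival_prob=0.3, seed=1):
    random.seed(seed)
    buffers = [0] * stations
    dropped = 0
    token = 0
    for _ in range(steps):
        for s in range(stations):
            if random.random() < arrival_prob:   # Bernoulli packet arrival
                if buffers[s] < buffer_size:
                    buffers[s] += 1
                else:
                    dropped += 1                 # buffer overflow: packet lost
        if buffers[token] > 0:
            buffers[token] -= 1                  # token holder sends one packet
        token = (token + 1) % stations           # pass the token on
    return dropped

print(simulate())
```

Sweeping `buffer_size` (and packet length, via the service model) over such runs is the kind of parameter study the abstract describes, just at far lower fidelity than Network II.5.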
99

Permutation-based data compression

Unknown Date (has links)
The use of permutations in data compression is an aspect worthy of further exploration. Prior work on permutation-based video compression was primarily oriented towards lossless algorithms. The study of previous algorithms led to a new algorithm that can be either lossless or lossy, in which both the amount of compression and the quality of the output can be controlled. The lossless version of our algorithm performs close to lossy versions of H.264, and it improves on them for the majority of the videos we analyzed. Our algorithm could be used in situations where there is a need for lossless compression and the video sequences are part of a single scene, e.g., medical videos, where loss of information could be risky or expensive. Some results on permutations, which may be of independent interest, arose in developing this algorithm. We report on these as well. / by Amalya Mihnea. / Thesis (Ph.D.)--Florida Atlantic University, 2011. / Includes bibliography. / Electronic reproduction. Boca Raton, Fla., 2011. Mode of access: World Wide Web.
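As a minimal illustration of the idea above (a pedagogical sketch, not the thesis algorithm), a block of samples can be coded losslessly as its sorted values plus the permutation that restores their original order; sorted values are typically easier to compress further, and quantizing them would give a lossy variant:

```python
# Permutation-based lossless coding sketch: split a block into (sorted values,
# permutation). The round trip reconstructs the block exactly.
def encode(block):
    order = sorted(range(len(block)), key=lambda i: block[i])
    return [block[i] for i in order], order   # (sorted values, permutation)

def decode(values, order):
    block = [0] * len(order)
    for rank, i in enumerate(order):
        block[i] = values[rank]               # put each value back in place
    return block

block = [7, 3, 3, 9, 1]
values, order = encode(block)
assert decode(values, order) == block         # round trip is lossless
```

Controlling loss versus compression would then amount to how coarsely `values` are quantized before storage, mirroring the lossless/lossy control the abstract describes.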
100

Modeling and analysis of security

Unknown Date (has links)
Cloud Computing is a new computing model consisting of a large pool of hardware and software resources in remote datacenters that are accessed through the Internet. Cloud Computing faces significant obstacles to its acceptance, such as security, virtualization, and lack of standardization. There is a long-running debate about the role of Cloud standards, and more demands for them are being put on the table; the Cloud standardization landscape remains ambiguous. To model and analyze security standards for Cloud Computing and web services, we surveyed Cloud standards, focusing on those for security, and classified them by groups of interest. Cloud Computing leverages a number of technologies such as Web 2.0, virtualization, and Service-Oriented Architecture (SOA). SOA uses web services to facilitate the creation of SOA systems by adopting different technologies despite their differences in formats and protocols. Several committees, such as W3C and OASIS, are developing standards for web services; their standards are rather complex and verbose. We have expressed web services security standards as patterns to make their key points easy for designers and users to understand. We have written patterns for two web services standards, WS-SecureConversation and WS-Federation, completing earlier work we had done on web services standards. We showed relationships between web services security standards and used them to address major Cloud security issues such as authorization and access control, trust, and identity management. Close to web services, we investigated the Business Process Execution Language (BPEL) and addressed security considerations in BPEL and how to enforce them. To see how Cloud vendors view web services standards, we took Amazon Web Services (AWS) as a case study; in the AWS documentation, web services security standards are barely mentioned.
We highlighted areas where web services security standards could address some AWS limitations and improve the AWS security process. Finally, we studied the security guidance of two major Cloud-focused organizations, CSA and NIST. Both overlook the quality attributes offered by web services security standards. We expanded their work and added the benefits of adopting web services security standards in securing the Cloud. / Includes bibliography. / Dissertation (Ph.D.)--Florida Atlantic University, 2013.
