1

CROSS-DOMAIN MESSAGE ORIENTED INTEROPERABILITY FRAMEWORK

SHARIFABADI, DEHMOOBAD AZIN January 2009 (has links)
The variety and heterogeneity of information communication standards in different application domains are the main sources of complexity in interoperability provision among those application domains. The maturity of an application domain can be assessed by the ease of communication of terms between different stakeholders in the same domain, which is central in defining standards for communication of information among organizations. Currently, most research activities are focused on standardization and interoperability among information systems within the same domain.

However, an emerging challenge is to address the exchange of information among heterogeneous applications in different domains, such as healthcare and insurance. This requires data extraction to obtain common subsets of information in the collaborating domains. The second step is to provide intra-domain and inter-domain semantic interoperability through proprietary and shared ontology systems.

In this context, we address the above challenges by describing a framework that employs healthcare standard development frameworks and clinical terminology systems to achieve semantic interoperability between distributed systems in different application domains. A real-world case study, which addresses message-oriented integration of business processes between healthcare and insurance, is demonstrated. / Master of Applied Science (MASc)
2

Examination of the Use of Heads-up Displays in Driving

Williams, Matthew 01 January 2017 (has links)
Thousands of preventable automobile accidents occur each year. Many of these accidents are the result of driver distraction, in which the driver’s attention is diverted from the road. This distraction can come from several different sources; sometimes it comes from drivers looking down at a speedometer or dashboard display. Other times it can come from the use of a mobile device for texting or navigation. While activities like texting are clearly unacceptable while operating a vehicle, others are necessary for the proper operation of a car. Still others, like navigation, are extremely convenient and have become an integral part of day-to-day life. It seems highly possible that such convenient driving aids, as well as necessary information such as speed, could be incorporated into a heads-up display for a safer driving experience.
3

Future of Payment Platforms

Youssefzadeh, Salim Benjamin 01 June 2014 (has links)
With the vast increase in smartphones, there has been a growing number of opportunities in the app industry. One in particular is the way we deal with money. There are huge overheads in the current payment systems around the world, particularly in the United States, many of which include large transaction fees. Many new businesses have grown to solve these inefficiencies and create a new platform that provides a new user experience, security, and convenience, among many other things. However, many of these platforms are still centralized, making them more susceptible to attacks. This thesis goes over the various methods of payment, starting from their origins, and discusses their flaws and the ways they are being improved. This study explains where payment platforms are going and how they line up against other platforms in terms of security and usability. We look at the origins of credit cards and why the US is lagging behind other countries in credit card security. Digital wallets like PayPal, Venmo, Square, etc. have done a remarkable job, but still have room for improvement in terms of security and usage. I try to solve these problems with the mobile application AnyCoin by bringing together one platform that houses different types of digital wallets. The goal of this application was to grow a large user base and collect data from the transactions for future analysis and advertising. This study goes through an in-depth analysis of the application from the perspective of merchants and consumers to understand what users are looking for in digital wallets. Decentralized platforms and crypto-currencies like Bitcoin have also created different ways to send money by creating a trustless system that does not depend on any central authority. I discuss what Bitcoin is, exactly how it works, and the flaws in the current system. Mining is the process that puts Bitcoin into circulation and secures the network. However, as more customized hardware is released, Bitcoin will fall subject to becoming more centralized, and unfortunately become heavily regulated if it is to be used as a currency. Ethereum is a new technology that takes the concepts of Bitcoin and creates a platform for a developer to create a decentralized application. I create a few contracts that show how we can create a decentralized version of PayPal that works using other crypto-currencies. Ethereum is still in its alpha stage and has yet to be released to the public, but has already improved on the problems that Bitcoin and other crypto-currencies hold.
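As a rough illustration of the mining process this abstract alludes to, the sketch below shows a toy proof-of-work loop in Python: a nonce is searched until the block header's double-SHA-256 hash falls below a difficulty target. The header bytes, difficulty, and nonce encoding are hypothetical simplifications, not Bitcoin's actual consensus rules.

```python
import hashlib

def mine(block_header: bytes, difficulty_bits: int, max_nonce: int = 2**32):
    """Toy proof-of-work search: find a nonce whose double-SHA-256 hash
    of (header || nonce) is below the target implied by difficulty_bits."""
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_nonce):
        payload = block_header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce  # this nonce "proves" the work was done
    return None

# Hypothetical header and an easy difficulty so the demo finishes quickly.
print(mine(b"example block header", difficulty_bits=16))
```

Raising `difficulty_bits` by one roughly doubles the expected search time, which is the lever that ties mining cost to network security.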
4

Toward General Purpose 3D User Interfaces: Extending Windowing Systems to Three Dimensions

Reiling, Forrest Freiling 01 June 2014 (has links) (PDF)
Recent growth in the commercial availability of consumer grade 3D user interface devices like the Microsoft Kinect and the Oculus Rift, coupled with the broad availability of high performance 3D graphics hardware, has put high quality 3D user interfaces firmly within the reach of consumer markets for the first time ever. However, these devices require custom integration with every application which wishes to use them, seriously limiting application support, and there is no established mechanism for multiple applications to use the same 3D interface hardware simultaneously. This thesis proposes that these problems can be solved in the same way that the same problems were solved for 2D interfaces: by abstracting the input hardware behind input primitives provided by the windowing system and compositing the output of applications within the windowing system before displaying it. To demonstrate the feasibility of this approach this thesis also presents a novel Wayland compositor which allows clients to create 3D interface contexts within a 3D interface space in the same way that traditional windowing systems allow applications to create 2D interface contexts (windows) within a 2D interface space (the desktop), as well as allowing unmodified 2D Wayland clients to window into the same 3D interface space and receive standard 2D input events. This implementation demonstrates the ability of consumer 3D interface hardware to support a 3D windowing system, the ability of this 3D windowing system to support applications with compelling 3D interfaces, the ability of this style of windowing system to be built on top of existing hardware accelerated graphics and windowing infrastructure, and the ability of such a windowing system to support unmodified 2D interface applications windowing into the same 3D windowing space as the 3D interface applications. This means that application developers could create compelling 3D interfaces with no knowledge of the hardware that supports them, that new hardware could be introduced without needing to integrate it with individual applications, and that users could mix whatever 2D and 3D applications they wish in an immersive 3D interface space regardless of the details of the underlying hardware.
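The core architectural idea — the windowing system owning the input hardware and compositing every client's output into one shared 3D space — could be pictured roughly as in the Python model below. The class and method names are hypothetical and only model the role described in the abstract, not the actual Wayland compositor implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Surface:
    """Output of one client: a flat 2D window or a 3D interface context."""
    client: str
    is_3d: bool
    transform: tuple  # position/orientation in the shared 3D space (hypothetical)

@dataclass
class Compositor:
    """Toy model of the windowing-system role: it owns the input hardware,
    turns raw device data into input primitives, and composites every
    surface into one 3D scene before display."""
    surfaces: list = field(default_factory=list)

    def create_context(self, client: str, is_3d: bool, transform=(0.0, 0.0, 0.0)):
        surface = Surface(client, is_3d, transform)
        self.surfaces.append(surface)
        return surface

    def dispatch_input(self, event):
        # Route a 3D input primitive to the surface it targets; for an
        # unmodified 2D client it would be projected to a standard 2D event.
        ...

    def composite(self):
        # Render all surfaces into the shared 3D interface space each frame.
        return [(s.client, s.is_3d, s.transform) for s in self.surfaces]

compositor = Compositor()
compositor.create_context("legacy_text_editor", is_3d=False)
compositor.create_context("immersive_viewer", is_3d=True, transform=(0.0, 1.5, -2.0))
print(compositor.composite())
```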
5

USING SUFFIX ARRAYS FOR LEMPEL-ZIV DATA COMPRESSION

AL-HAFIDH, ANISA 08 1900 (has links)
In the 1970s, Abraham Lempel and Jacob Ziv developed the first dictionary-based compression methods (LZ77 and LZ78). Their ideas have been a wellspring of inspiration to many researchers, who generalized, improved and combined them with run-length encoding (RLE) and statistical methods to form many commonly used lossless compression methods for text, images and sound.

The proposed methods factor a string x into substrings (factors) in such a way as to facilitate encoding the string into a compressed form (lossless text compression). This LZ factorization, as it is commonly called today, became a fundamental data structure in string processing, especially valuable for string compression. Recently, it found applications in computing various "regularities" in strings.

The main principle of LZ methods is to use a portion of the previously seen input string as the dictionary. LZ77 and LZ78 encoders differ in two aspects. The first aspect is that LZ77 uses a sliding window, unlike LZ78 which uses the entire string for building the dictionary. The use of a sliding window in LZ77 makes its decoder much simpler and faster than the LZ78 decoder. This implies that LZ77 is valuable in cases where a file is compressed once (or just a few times) and is decompressed often. A rarely used archive of compressed files is a superb example. The other aspect is the format of the codewords. LZ77 codewords consist of three parts: position, length and first non-matching symbol, while LZ78 removes the need for the length of the match in the codeword since it is implied in the dictionary.

A whole family of algorithms has stemmed from the original LZ algorithms (LZ77 and LZ78). This was a result of an effort to improve upon the LZ encoding algorithm in terms of speed and compression ratio. Some of these variants involved the use of sophisticated data structures (e.g. suffix trees, binary search trees, etc.) to hold the dictionary in order to boost the search time. The problem with such data structures is that the amount of memory required is variable and cannot be known in advance. Furthermore, some of these data structures require a substantial amount of memory. LZ is the basis of the gzip (Unix), winzip and pkzip compression techniques.

In the testing for [1], we scaled up an LZSS implementation due to Haruhiko Okumura [37] so as to be useful for regularities (N = n, the length of the whole input string, and F equal to the full length of the unfactored suffix). We found that the binary tree approach becomes uncompetitive with algorithms that use the suffix array (SA) approach for the LZ factorization of the whole string. This observation triggered us to scale down the SA approach.

The main contribution of this thesis is a novel LZ77 variant (LZAS) that uses a suffix array (SA) to perform the searches. The SA is augmented with a very simple and efficient algorithm that alternates between searching left and right in the SA to find the longest match. Suffix arrays have gained the attention of researchers in recent years due to their simplicity and low memory requirements. They solve the substring problem as efficiently as suffix trees, using less memory. One notable advantage of using an SA in an LZ encoder is that the amount of memory is independent of the text to be searched and can be defined a priori. The low and predictable memory requirement of this approach makes it suitable for memory-critical applications such as embedded systems. Moreover, our experiments show that the processing time per letter is almost stable and hence we can predict the processing time for a file given its size. Our proposed algorithm can additionally be used for forward/backward substring search.

In this thesis we investigate three variants of the LZAS algorithm. The first two of these variants (i.e. LZAS1 and LZAS2) use a dynamic suffix array (DSA). A DSA is a suffix array that can be updated whenever a letter or a factor is edited (i.e. deleted/inserted/substituted by another letter or factor) in the original string. The suffix array of a sliding window changes whenever the window slides. Hence, we use the DSA to make sure that the suffix array is up to date. The DSA can be compressed using a sampling technique; therefore we decided to experiment with both sampled and non-sampled DSA. The third variant (i.e. LZAS3) re-computes the suffix array instead of updating it. We use an implementation of a suffix array construction algorithm (SACA) that requires supralinear time [28] but performs well in practice. We tested these variants against each other in terms of time and space. We further experimented with various window sizes and noticed that re-computing the SA becomes better than updating it using a DSA when the window size is small (i.e. hundreds of bytes compared to thousands of bytes). / Master of Science (MS)
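As a rough sketch of the search strategy described above, the Python snippet below factorizes a string with a suffix array by looking left and right from the current suffix's rank for the nearest previously seen suffix and taking the longer of the two matches. It is only a simplified rendering of the idea: it works on the whole string rather than a sliding window, uses a naive suffix-array construction, and ignores the dynamic-suffix-array machinery of LZAS1/LZAS2.

```python
def suffix_array(s: str):
    # Naive O(n^2 log n) construction, for illustration only; the thesis
    # relies on a dedicated SACA or a dynamic suffix array instead.
    return sorted(range(len(s)), key=lambda i: s[i:])

def lcp(s: str, i: int, j: int) -> int:
    """Length of the longest common prefix of the suffixes at i and j."""
    n, k = len(s), 0
    while i + k < n and j + k < n and s[i + k] == s[j + k]:
        k += 1
    return k

def lz_factorize(s: str):
    """Return LZ factors: (literal_char, 0) or (previous_position, length)."""
    n = len(s)
    sa = suffix_array(s)
    rank = [0] * n
    for r, i in enumerate(sa):
        rank[i] = r
    factors, i = [], 0
    while i < n:
        best_len, best_pos = 0, -1
        # Walk outward from rank[i] in both directions; the nearest suffix on
        # each side that starts before i yields that side's longest match.
        for step in (-1, +1):
            r = rank[i] + step
            while 0 <= r < n:
                j = sa[r]
                if j < i:
                    length = lcp(s, i, j)
                    if length > best_len:
                        best_len, best_pos = length, j
                    break
                r += step
        if best_len == 0:
            factors.append((s[i], 0))          # literal symbol
            i += 1
        else:
            factors.append((best_pos, best_len))  # reference to earlier text
            i += best_len
    return factors

print(lz_factorize("abababb"))  # [('a', 0), ('b', 0), (0, 4), (3, 1)]
```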
6

A novel lossless compression technique for text data

Blandon, Julio Cesar 22 November 1999 (has links)
The focus of this thesis is placed on text data compression based on the fundamental coding scheme referred to as the American Standard Code for Information Interchange, or ASCII. The research objective is the development of software algorithms that result in significant compression of text data. Past and current compression techniques have been thoroughly reviewed to ensure proper contrast between the compression results of the proposed technique and those of existing ones. The research problem is based on the need to achieve higher compression of text files in order to save valuable memory space and increase the transmission rate of these text files. It was deemed necessary that the compression algorithm to be developed be effective even for small files and be able to contend with uncommon words, which are dynamically included in the dictionary once they are encountered. A critical design aspect of this compression technique is its compatibility with existing compression techniques. In other words, the developed algorithm can be used in conjunction with existing techniques to yield even higher compression ratios. This thesis demonstrates such capabilities and such outcomes, and the research objective of achieving a higher compression ratio is attained.
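The dynamic-dictionary behaviour mentioned above — uncommon words being added as they are first encountered — could look roughly like the toy word coder below. The abstract does not specify the actual coding scheme, so the output format and dictionary policy here are purely illustrative.

```python
def encode(text: str):
    """Toy word-dictionary coder: a word already in the dictionary is replaced
    by a short index, an unseen word is emitted literally and added on the fly."""
    dictionary, output = {}, []
    for word in text.split():
        if word in dictionary:
            output.append(("IDX", dictionary[word]))   # compact back-reference
        else:
            output.append(("LIT", word))               # literal, first occurrence
            dictionary[word] = len(dictionary)
    return output

print(encode("to be or not to be"))
# [('LIT', 'to'), ('LIT', 'be'), ('LIT', 'or'), ('LIT', 'not'), ('IDX', 0), ('IDX', 1)]
```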
7

A Software Development Kit for Camera-Based Gesture Interaction

Cronin, Devlin 01 December 2013 (has links) (PDF)
Human-Computer Interaction is a rapidly expanding field, in which new implementations of ideas are consistently being released. In recent years, much of the focus in this field has been on gesture-based control, either touch-based or camera-based. Even though camera-based gesture recognition was previously seen more in science fiction than in reality, this method of interaction is rising in popularity. There are a number of devices readily available to the average consumer that are designed to support this type of input, including the popular Microsoft Kinect and Leap Motion devices. Despite this rise in availability and popularity, development for these devices is currently an arduous task, unless only the simplest gestures are required. The goal of this thesis is to develop a Software Development Kit (SDK) with which developers can more easily develop interfaces that utilize gesture-based control. If successful, this SDK could significantly reduce the amount of work (both in effort and in lines of code) necessary for a programmer to implement gesture control in an application. This, in turn, could help reduce the intellectual barrier which many face when attempting to implement a new interface. The developed SDK has three main goals. The SDK will place an emphasis on simplicity of code for developers using it; will allow for a variety of gestures, including gestures made by single or multiple trackable objects (e.g., hands and fingers), gestures performed in stages, and continuously-updating gestures; and will be device-agnostic, in that it will not be written exclusively for a single device. The thesis presents the results of a system validation study that suggests all of these goals have been met.
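To make the three goals concrete, a hypothetical API in this spirit might expose staged gestures over device-agnostic trackables, as in the Python sketch below. All names and signatures here are invented for illustration; they are not the actual SDK's interface.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Trackable:
    id: int          # one hand, finger, or other tracked object
    position: tuple  # (x, y, z) reported by whatever device is attached

@dataclass
class GestureStage:
    predicate: Callable[[List[Trackable]], bool]  # does this frame satisfy the stage?

class GestureRecognizer:
    """Device-agnostic recognizer: any backend (Kinect, Leap Motion, ...)
    just feeds frames of Trackables into update()."""
    def __init__(self, stages: List[GestureStage], on_complete: Callable[[], None]):
        self.stages, self.on_complete, self._current = stages, on_complete, 0

    def update(self, frame: List[Trackable]) -> None:
        # Advance through the stages in order; fire the callback at the end.
        if self.stages[self._current].predicate(frame):
            self._current += 1
            if self._current == len(self.stages):
                self.on_complete()
                self._current = 0

# Example: a two-stage "raise hand" gesture built from simple predicates.
raise_hand = GestureRecognizer(
    stages=[
        GestureStage(lambda f: any(t.position[1] < 0.5 for t in f)),  # hand low
        GestureStage(lambda f: any(t.position[1] > 1.5 for t in f)),  # hand high
    ],
    on_complete=lambda: print("hand raised"),
)
raise_hand.update([Trackable(1, (0.0, 0.2, 0.0))])
raise_hand.update([Trackable(1, (0.0, 1.8, 0.0))])
```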
8

BLOCKCHAIN SCALABILITY AND SECURITY

Duong, Tuyet 01 January 2018 (has links)
Cryptocurrencies like Bitcoin have proven to be a phenomenal success. The underlying techniques hold huge promise to change the future of financial transactions, and eventually the way people and companies compute, collaborate, and interact. At the same time, the current Bitcoin-like proof-of-work based blockchain systems are facing many challenges. In particular, a huge amount of energy/electricity is needed for maintaining the Bitcoin blockchain. In addition, their security holds only if the majority of the computing power is under the control of honest players. However, this assumption has been seriously challenged recently, and Bitcoin-like systems will fail when it is broken. This research proposes novel blockchain designs to address these challenges. We first propose a novel blockchain protocol, called 2-hop blockchain, which combines proof-of-work and proof-of-stake mechanisms. As a result, even if the adversary controls more than 50% of the computing power, the honest players still have the chance to defend the blockchain via honest stake. Then we revise and implement the design to obtain a practical cryptocurrency system called Twinscoin. In more detail, we introduce a new strategy for difficulty adjustment in the hybrid blockchain and provide an analysis of it. We also show how to construct a light client for proof-of-stake cryptocurrencies and evaluate the proposal in practice. We implement our new design. Our implementation uses a recent modular development framework for blockchains, called Scorex, which allows us to change only certain parts of an application while leaving the rest of the codebase intact.
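A very rough way to picture the hybrid idea — a proof-of-work block only counting once a stakeholder selected from its hash signs off — is the Python sketch below. The target, stake table, and selection rule are invented for illustration and do not reproduce the thesis's actual 2-hop protocol or Twinscoin's parameters.

```python
import hashlib

POW_TARGET = 2 ** 240               # hypothetical difficulty target
STAKE = {"alice": 60, "bob": 40}    # hypothetical stake distribution

def pow_hash(header: bytes, nonce: int) -> bytes:
    return hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()

def pow_valid(header: bytes, nonce: int) -> bool:
    # First "hop": the block must satisfy the proof-of-work target.
    return int.from_bytes(pow_hash(header, nonce), "big") < POW_TARGET

def elected_stakeholder(block_hash: bytes) -> str:
    # Second "hop": pick a stakeholder pseudo-randomly, weighted by stake,
    # seeded by the proof-of-work block hash.
    ticket = int.from_bytes(block_hash, "big") % sum(STAKE.values())
    running = 0
    for holder, stake in STAKE.items():
        running += stake
        if ticket < running:
            return holder
    raise RuntimeError("unreachable")

def block_valid(header: bytes, nonce: int, stake_signer: str) -> bool:
    # A block is accepted only if it passes the work check AND is endorsed
    # by the stakeholder elected from its hash.
    if not pow_valid(header, nonce):
        return False
    return stake_signer == elected_stakeholder(pow_hash(header, nonce))
```

The point of the combination is visible in the last check: controlling most of the hash power alone is not enough, because the elected endorser is drawn from the stake distribution.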
9

A Novel Technique for CTIS Image-Reconstruction

Horton, Mitchel Dewayne 01 May 2010 (has links)
Computed tomography imaging spectrometer (CTIS) technology is introduced and its use is discussed. An iterative method is presented for CTIS image reconstruction in the presence of both photon noise in the image and post-detection Gaussian system noise. The new algorithm assumes the transfer matrix of the system has a particular structure. Error analysis, performance evaluation, and parallelization of the algorithm are carried out. Complexity analysis is performed for the proof-of-concept code developed. Future work is discussed relating to potential improvements to the algorithm. An intuitive explanation for the success of the new algorithm is that it reformulates the image reconstruction problem as a constrained problem such that an explicit closed-form solution can be computed when the constraint is ignored. Incorporating the constraint leads to an inverse matrix problem which can be dealt with using a conjugate gradient method. A weighted iterative refinement technique is employed because the conjugate gradient solver is terminated prematurely. This dissertation makes the following contributions to the state of the art. First, our method is several orders of magnitude faster than the previous industry best (the multiplicative algebraic reconstruction technique (MART) and the mixed-expectation reconstruction technique (MERT)). Second, error bounds are established. Third, open-source proof-of-concept code is made available.
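The overall strategy — a truncated conjugate gradient solve wrapped in a few rounds of iterative refinement — can be sketched as below. This is only a schematic NumPy rendering under assumed simplifications: a plain non-negativity clip stands in for the actual constraint, the solve is on the ordinary normal equations, and the weighting and noise model of the dissertation's algorithm are omitted.

```python
import numpy as np

def conjugate_gradient(A, b, iters):
    """Standard CG for symmetric positive (semi-)definite A, stopped early."""
    x = np.zeros(b.shape, dtype=float)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def reconstruct(H, g, n_refine=3, cg_iters=25):
    """Sketch: solve H^T H f = H^T g with truncated CG, then polish the
    estimate with a few rounds of iterative refinement and a simple
    non-negativity clip standing in for the constraint."""
    A = H.T @ H
    b = H.T @ g
    f = np.zeros(A.shape[0])
    for _ in range(n_refine):
        r = b - A @ f                              # residual of current estimate
        f += conjugate_gradient(A, r, cg_iters)    # correction from truncated CG
        f = np.clip(f, 0.0, None)                  # assumed constraint (illustrative)
    return f
```

Because each refinement round restarts CG on the current residual, the early termination of the inner solver is compensated by the outer loop, which is the role the abstract assigns to the (weighted) iterative refinement step.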
10

Lottery Scheduling in the Linux Kernel: A Closer Look

Zepp, David 01 June 2012 (has links) (PDF)
This paper presents an implementation of a lottery scheduler, followed from design through debugging to performance testing. Desirable characteristics of a general-purpose scheduler include low overhead, good overall system performance for a variety of process types, and fair scheduling behavior. Testing is performed, along with an analysis of the results measuring the lottery scheduler against these characteristics. Lottery scheduling is found to provide better-than-average control over the relative execution rates of processes. The results show that lottery scheduling functions as a good mechanism for sharing the CPU fairly between users that are competing for the resource. While the lottery scheduler proves to have several interesting properties, overall system performance suffers and does not compare favorably with the balanced performance afforded by the standard Linux kernel’s scheduler.
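The policy itself is simple enough to illustrate in a few lines: each runnable process holds some number of tickets, and every scheduling decision draws one ticket uniformly at random, so a process's expected CPU share is proportional to its ticket count. The Python simulation below is a toy model with made-up ticket counts, not the kernel implementation described in the paper.

```python
import random

def pick_next(processes):
    """processes maps a process name to its ticket count; draw one ticket
    uniformly at random and return the holder."""
    total = sum(processes.values())
    winner = random.randrange(total)
    for name, tickets in processes.items():
        if winner < tickets:
            return name
        winner -= tickets
    raise RuntimeError("unreachable")

runnable = {"user_a_task": 300, "user_b_task": 100}   # hypothetical tickets
tally = {name: 0 for name in runnable}
for _ in range(10_000):                               # simulate many decisions
    tally[pick_next(runnable)] += 1
print(tally)                                          # roughly a 3:1 split
```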
