21

Young Malaysians' blogging habits and a linguistic analysis of their use of English in their weblogs

Ong, Lok Tik January 2016 (has links)
The size of the blogosphere has long been a contentious issue amongst people researching social media, as it cannot be accurately determined. Bodies (BlogPulse, BlogScope, Technorati, etc.) which used to track the growing phenomenon across the world were careful with their choice of words when reporting on its size; for example, Sifry's Technorati report said, "On July 31, 2006, Technorati tracked its 50 millionth blog" (Sifry, 2006, August 6). However, as Rosenberg (2006, August 9) points out, "… it doesn't really matter. There's still a revolution going on." This 'revolution' is dominated by young people; in Malaysia, 74% of bloggers were found to be below 25 years old (Windows Live Spaces, 2006), yet there is limited research on the phenomenon of casual blogging amongst this age group in Malaysia and on the use of English in their blogs. The current study contributes to this body of literature, drawing on works on blogging, linguistic analysis, identity, and varieties of English. It adopts a social-constructivist framework and postulates that blogging is a social action which keeps the blogosphere in a state of constant revision, where "individuals create their own subjective meanings of their experiences through interactions with each other and their surrounding environment" (Hartas, 2010:44). The study used mixed methods to answer the research questions, employing three instruments (survey, interview, and weblog analysis) to yield the data needed to investigate the content and interactive blog communication of selected young Malaysian casual bloggers who blog in English.
The survey data yielded information about the bloggers' habits and content; the interview data yielded information about the language learning endeavours that influenced their choice of language or varieties of language in their blogs; and the in-depth analysis of one blog showed how language was used to achieve communicative intent. The findings reveal the blogging habits of young Malaysian bloggers, and how their attitude towards their identity as Malaysians using English, together with socio-cultural factors, influences their choice of language and/or varieties of English in their blog communication. The study also uncovers unconventional ways of using an existing language to achieve communicative intent among those in the same blogospheric region. It makes both the bloggers and their blog texts the focus of its research.
22

An investigation of the ant-based hyper-heuristic for capacitated vehicle routing problem and traveling salesman problem

Abd Aziz, Zalilah January 2013 (has links)
A brief survey of recent research on routing problems shows that most methods used to tackle them are heuristics and metaheuristics, which often use problem-specific knowledge to build or improve solutions. In the last few years, research has investigated hyper-heuristics, which aim to raise the generality of optimisation systems. This thesis is concerned with the investigation of an ant-based hyper-heuristic. Ant algorithms have been applied to vehicle routing problems and have produced competitive results; it is therefore reasonable to expect that an ant-based hyper-heuristic could perform well for these problems. The thesis first surveys the literature on common solution methodologies for optimisation problems and explores in some detail ant algorithms and ant-algorithm hyper-heuristic methods. It then reviews the literature specifically concerned with two routing problems: the capacitated vehicle routing problem (CVRP) and the travelling salesman problem (TSP). The thesis studies the ant system algorithm and proposes an ant-algorithm hyper-heuristic that introduces a new pheromone update rule in order to improve its performance. The proposed approach, called the ant-based hyper-heuristic, is tested on the CVRP and the TSP. Although it does not produce any new best-known results, the experimental results show that it is competitive with other methods. Most importantly, it demonstrates that simple, easy-to-implement low-level heuristics can be used with no extensive parameter tuning. Further analysis shows that the approach possesses a learning mechanism when compared to a random hyper-heuristic. The study also investigates the appropriate number of low-level heuristics and finds that the more low-level heuristics are used, the better the generated solutions. In addition, an ACO hyper-heuristic with two categories of pheromone updates is developed.
However, the ant-based hyper-heuristic performs better, which is inconsistent with the reported performance of the ACO algorithm in the literature. For the TSP, we utilise two different categories of low-level heuristics: the TSP heuristics and the CVRP heuristics previously used for the CVRP. From this observation it can be seen that, using either set of heuristics for the same class of problems, the ant-based hyper-heuristic is able to produce competitive results, demonstrating that it is a reusable method. One major advantage of this work is the use of the same parameter settings for all problem instances, with simple move and swap procedures. It is hoped that in the future, better results than the current ones will be obtained by using more intelligent low-level heuristics.
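The selection loop of such an ant-based hyper-heuristic can be sketched as follows. This is a minimal illustration: the reward-on-improvement pheromone update and all names are assumptions for exposition, not the thesis's actual update rule.

```python
import random

def ant_hyper_heuristic(solution, cost, low_level_heuristics,
                        iterations=100, evaporation=0.1, seed=0):
    """Pheromone-guided choice among low-level heuristics (illustrative sketch)."""
    rng = random.Random(seed)
    pheromone = [1.0] * len(low_level_heuristics)
    best, best_cost = solution, cost(solution)
    for _ in range(iterations):
        # Choose a low-level heuristic with probability proportional to its pheromone
        idx = rng.choices(range(len(low_level_heuristics)), weights=pheromone)[0]
        candidate = low_level_heuristics[idx](best, rng)
        candidate_cost = cost(candidate)
        # Evaporate all trails, then reward the chosen heuristic if it improved the solution
        pheromone = [(1.0 - evaporation) * p for p in pheromone]
        if candidate_cost < best_cost:
            best, best_cost = candidate, candidate_cost
            pheromone[idx] += 1.0
    return best, best_cost
```

On a toy ordering problem, the low-level heuristics could be a random swap and a random segment reversal, mirroring the simple move and swap procedures mentioned above.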
23

Facilitating the development of location-based experiences

Oppermann, Leif January 2009 (has links)
Location-based experiences depend on the availability and reliability of wireless infrastructures such as GPS, Wi-Fi, or mobile phone networks; but these technologies are not available everywhere at all times. Studies of deployed experiences have shown that the characteristics of wireless infrastructures, especially their limited coverage and accuracy, have a major impact on the performance of an experience. It is in the designers' interest to be aware of technological restrictions on their work. Current state-of-the-art authoring tools for location-based experiences implement one common overarching model: the idea of taking a map of the physical area in which the experience is to take place and then somehow placing virtual trigger zones on top of it. This model leaves no space for technological shortcomings and assumes a perfect registration between the real and the virtual. In order to increase the designers' awareness of the technology, this thesis suggests revealing the wireless infrastructures at authoring time through appropriate tools and workflows. This is thought to aid designers in better understanding the characteristics of the underlying technology and thereby enable them to deal with potential problems before their work is deployed to the public. This approach was studied in practice by working with two groups of professional artists who built two commercially commissioned location-based experiences, and was evaluated using qualitative research methods. The first experience is a pervasive game for mobile phones called 'Love City' that relies on cellular positioning. The second is a pervasive game for cyclists called 'Rider Spoke' that relies on Wi-Fi positioning. The evaluation of these two experiences revealed the importance of an integrated suite of tools that spans indoors and outdoors, and which supports designers in better understanding the location mechanism they have decided to work with.
It was found that designers can successfully create experiences that cope with patchy, coarse-grained, and varying wireless networks, as long as they are made aware of these characteristics.
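One way such awareness can feed into authoring is to make trigger-zone logic explicit about positioning accuracy, rather than assuming perfect registration. The following sketch is illustrative only; the function and parameter names are assumptions, not taken from the thesis's tools.

```python
import math

def zone_triggered(est_x, est_y, accuracy_m,
                   zone_x, zone_y, zone_radius_m, conservative=True):
    """Decide whether a circular trigger zone fires for a position estimate
    carrying a known accuracy radius (designing for uncertainty)."""
    d = math.hypot(est_x - zone_x, est_y - zone_y)
    if conservative:
        # Fire only if the whole uncertainty circle lies inside the zone
        return d + accuracy_m <= zone_radius_m
    # Optimistic: fire if the uncertainty circle overlaps the zone at all
    return d - accuracy_m <= zone_radius_m
```

A designer could then choose, per zone, whether missed triggers (conservative) or spurious triggers (optimistic) are the lesser problem for the experience.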
24

High QoS and energy efficient medium access control protocols for wireless sensor networks

Khan, Bilal Muhammad January 2011 (has links)
The development of wireless sensor nodes has revolutionised sensing and control applications. The small size of a sensor node makes it ideal for use in a variety of applications, but this brings challenges and problems, especially as the capacity of the onboard battery is limited. For this very reason, initial research in the field of WSNs, especially on the MAC layer, targeted mainly energy conservation and gave secondary importance to other attributes of MAC protocols: latency, throughput, fairness, and collision. Keeping in view current application requirements, which demand QoS as well as energy conservation in static and mobile sensor networks, this research proposes MAC protocols to meet these challenges. To improve the efficiency of the collision resolution algorithms used mainly in contention-based MAC protocols, an Improved Binary Exponential Backoff algorithm is proposed. Its main target is to resolve the problem of access collision by employing an interim backoff period, improving upon the conventional Binary Exponential Backoff algorithm, which suffers heavily from collision. The results show a significant reduction in collisions, which increases the efficiency of the network in terms of QoS and energy conservation. To eliminate the problem of collision, one of the major sources of network performance degradation, a novel Delay Controlled Collision Free (DCCF) contention-based MAC is designed. The protocol uses a novel delay allocation technique, and DCCF also provides a mechanism to achieve fairness among the nodes. Detailed analysis and comparative results show a substantial increase in throughput and a decrease in latency compared with the industry-standard IEEE 802.15.4 CSMA/CA MAC. The research also proposes novel MAC protocols for mobile sensor networks.
These protocols use a methodology based on the signal strength of beacons sent to a node from neighbouring coordinators, which enables nodes to move seamlessly from one cluster to another without link loss or unnecessary association delays. The proposed scheme is implemented over IEEE 802.15.4, enabling the standard to perform better under dynamic topology. Results show that the mobility-adaptive 802.15.4 protocol improves QoS and conserves energy far better than the existing conventional CSMA/CA MAC standard. The algorithm is also implemented over the Delay Controlled Collision Free MAC protocol, and a detailed comparison with other mobility-adaptive MAC protocols shows a significant decrease in latency, a high gain in throughput, and a considerable reduction in energy consumption. Finally, to resolve the fundamental problem of scalable networks, which suffer from a bottleneck as more last-hop nodes try to send data towards the sink, a novel protocol is proposed that allows more than one node at a time to transmit data towards the sink. The protocol, named Simultaneous Multi-node CSMA/CA, enables the conventional IEEE 802.15.4 CSMA/CA standard to allow more than one node to transmit data towards the coordinator or sink node. The protocol outperforms the existing standard and provides a significant increase in the QoS of the network.
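For context, the conventional mechanism that the Improved Binary Exponential Backoff builds on can be sketched as follows. The interim backoff period of the proposed protocol is not reproduced here, and the helper name is an assumption.

```python
import random

def csma_ca_backoff_slots(attempt, mac_min_be=3, mac_max_be=5, rng=None):
    """Backoff delay (in unit backoff periods) for the n-th transmission attempt
    under the standard binary exponential backoff of IEEE 802.15.4 CSMA/CA."""
    rng = rng or random.Random()
    be = min(mac_min_be + attempt, mac_max_be)   # backoff exponent grows per failed attempt
    return rng.randint(0, 2 ** be - 1)           # uniform delay in [0, 2^BE - 1] slots
```

The heavy collision behaviour mentioned above arises because, under contention, many nodes draw from the same small initial window [0, 2^3 - 1] and can pick the same slot.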
25

Broadband electric field sensing and its application to material characterisation and nuclear quadrupole resonance

Mukherjee, Shrijit January 2012 (has links)
The aim of this project is to address the challenges associated with extending the radio frequency capability of Electric Potential Sensors to greater than 10 MHz. This has culminated in a single broadband sensor with a frequency range of 200 Hz to greater than 200 MHz. The use of Electric Potential Sensors for the measurement of electric fields with minimal perturbation has already been demonstrated at Sussex. These high-impedance sensors have been successfully employed in measuring signals with frequencies in the range 1 mHz to 2 MHz, and many different versions have been produced to cater for specific measurement requirements in a wide variety of experimental situations. From the point of view of this project, the relevant prior work is the acquisition of a 2 MHz electric field nuclear magnetic resonance signal, and the non-destructive testing of composite materials at audio frequencies. Two very distinct electric field measurement scenarios are described which illustrate the diverse capabilities of the broadband sensor. Firstly, an electric field readout system for nuclear quadrupole resonance is demonstrated for the first time, with a sodium chlorate sample at a frequency of 30 MHz. Nuclear quadrupole resonance is an important technique with applications in the detection of explosives and narcotics. Unlike nuclear magnetic resonance, it does not require a large magnet, opening up the possibility of portable equipment. The electric field readout system is shown to be simpler than the conventional magnetic readout and may therefore contribute to the development of portable devices. Secondly, a broadband, high spatial resolution microscope system for materials characterisation with four different imaging modes is described.
These include: the surface topography of a conducting sample; the variation of dielectric constant in a glass/epoxy composite; the variation of conductivity in a carbon fibre composite; and the electrode pixels inside a solid-state CMOS fingerprint sensor.
26

High capacity multiuser multiantenna communication techniques

Al-Hussaibi, Walid Awad January 2011 (has links)
One of the main issues in the development of future wireless communication systems is the multiple access technique used to share the available spectrum efficiently among users. In a rich multipath environment, the spatial dimension can be exploited to meet the increasing number of users and their demands without consuming extra bandwidth and power; it is therefore utilized in multiple-input multiple-output (MIMO) technology to increase spectral efficiency significantly. However, multiuser MIMO (MU-MIMO) systems still face challenges to wide adoption in next-generation standards. In this thesis, new techniques are proposed to increase the channel and user capacity and to improve the error performance of MU-MIMO in a Rayleigh fading channel environment. For realistic system design and performance evaluation, channel correlation is considered as one of the main channel impairments, due to its severe influence on capacity and reliability. Two simple methods, called the generalized successive coloring technique (GSCT) and the generalized iterative coloring technique (GICT), are proposed for accurate generation of correlated Rayleigh fading channels (CRFC). They are designed to overcome the shortcomings of existing methods by avoiding factorization of the desired covariance matrix of the Gaussian samples. The superiority of these techniques is demonstrated by extensive simulations of different practical system scenarios. To mitigate the effects of channel correlations, a novel constellation-constrained MU-MIMO (CC-MU-MIMO) scheme is proposed using transmit signal design and maximum likelihood joint detection (MLJD) at the receiver. It is designed to maximize the channel capacity and error performance based on the principle of maximizing the minimum Euclidean distance (dmin) of the composite received signals. Two signal design methods, named unequal power allocation (UPA) and rotation constellation (RC), are utilized to resolve the detection ambiguity caused by correlation.
Extensive analysis and simulations demonstrate the effectiveness of the considered scheme compared with conventional MU-MIMO. Furthermore, a significant gain in SNR is achieved, particularly under moderate to high correlations, which has a direct impact on maintaining high user capacity. A new efficient receive antenna selection (RAS) technique, referred to as phase difference based selection (PDBS), is proposed for single-user and multiuser MIMO systems to maximize the capacity over CRFC. It utilizes the received signal constellation to select the subset of antennas with the highest dmin, due to its direct impact on capacity and BER performance. A low-complexity algorithm is designed by employing the Euclidean norms of the channel matrix rows together with their corresponding phase differences. Capacity analysis and simulation results show that PDBS outperforms norm based selection (NBS) and is close to optimal selection (OS) for all correlation and SNR values. This technique provides fast RAS that captures most of the gains promised by multiantenna systems over different channel conditions. Finally, a novel group layered MU-MIMO (GL-MU-MIMO) scheme is introduced to exploit the available spectrum for higher user capacity with affordable complexity. It takes advantage of the spatial differences among users and of power control at the base station to increase the number of users beyond the available number of RF chains. This is achieved by dividing the users into two groups according to their received power: a high power group (HPG) and a low power group (LPG). Different configurations of low-complexity group layered multiuser detection (GL-MUD) and the group power allocation ratio (η) provide a valuable tradeoff between complexity and overall system performance. Furthermore, RAS diversity is incorporated by using NBS and a new selection algorithm called HPG-PDBS to increase the channel capacity and enhance the error performance.
Extensive analysis and simulations demonstrate the superiority of the proposed scheme compared with conventional MU-MIMO. With an appropriate value of η, it shows a higher sum-rate capacity and a substantial increase in user capacity, up to two-fold, at the target BER and SNR values.
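For context, the conventional "coloring" baseline that GSCT and GICT are designed to avoid factorizes the desired covariance matrix (for example by a Cholesky decomposition) and multiplies i.i.d. complex Gaussian samples by the factor; the Rayleigh envelopes are the magnitudes of the colored samples. A minimal sketch of that baseline follows; function names are illustrative.

```python
import math
import random

def cholesky(cov):
    """Cholesky factor L (cov = L L^T) of a real symmetric positive-definite matrix."""
    n = len(cov)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(cov[i][i] - s)
            else:
                L[i][j] = (cov[i][j] - s) / L[j][j]
    return L

def correlated_rayleigh(cov, rng=None):
    """One draw of correlated Rayleigh envelopes: color unit-variance i.i.d.
    complex Gaussians with the Cholesky factor of the desired covariance."""
    rng = rng or random.Random()
    n = len(cov)
    L = cholesky(cov)
    z = [complex(rng.gauss(0, 1), rng.gauss(0, 1)) / math.sqrt(2) for _ in range(n)]
    colored = [sum(L[i][k] * z[k] for k in range(n)) for i in range(n)]
    return [abs(c) for c in colored]
```

The factorization step is exactly what fails for ill-conditioned or non-positive-definite target covariances, which motivates factorization-free alternatives such as the proposed GSCT and GICT.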
27

Side information exploitation, quality control and low complexity implementation for distributed video coding

Zheng, Min January 2013 (has links)
Distributed video coding (DVC) is a video coding methodology that shifts the highly complex motion search components from the encoder to the decoder. Such a video coder has a great advantage in encoding speed while still being able to achieve rate-distortion (RD) performance similar to that of conventional coding solutions. Applications include wireless video sensor networks, mobile video cameras, and wireless video surveillance. Although much progress has been made in DVC over the past ten years, there is still a gap in RD performance between conventional video coding solutions and DVC, and the latest developments in DVC are still far from standardization and practical use. The key problems remain in areas such as accurate and efficient side information (SI) generation and refinement, quality control between Wyner-Ziv frames and key frames, correlation noise modelling, and decoder complexity. In this context, this thesis proposes solutions that improve on state-of-the-art side information refinement schemes, enable consistent quality control over decoded frames during the coding process, and implement a highly efficient DVC codec. The thesis investigates the impact of reference frames on side information generation and reveals that reference frames have the potential to be better side information than the extensively used interpolated frames. Based on this investigation, we also propose a motion range prediction (MRP) method to exploit reference frames and precisely guide the statistical motion learning process. Extensive simulation results show that choosing reference frames as SI performs competitively, and sometimes even better than interpolated frames. Furthermore, the proposed MRP method is shown to significantly reduce the decoding complexity without degrading RD performance.
To minimize block artifacts and achieve consistent improvement in both the subjective and objective quality of side information, we propose a novel side information synthesis framework working at pixel granularity: the SI is synthesized at pixel level to minimize block artifacts, and the correlation noise model is adapted to the new SI. Furthermore, we have fully implemented a state-of-the-art DVC decoder with the proposed framework using serial and parallel processing technologies, in order to identify bottlenecks and areas in which to further reduce the decoding complexity, another major challenge for future practical DVC system deployments. The performance is evaluated based on the latest transform-domain DVC codec and compared with different standard codecs. Extensive experimental results show substantial and consistent rate-distortion gains over standard video codecs and a significant speedup over the serial implementation. In order to bring state-of-the-art DVC one step closer to practical use, we address the problem of distortion variation introduced by typical rate control algorithms, especially in a variable bit rate environment. Simulation results show that the proposed quality control algorithm is capable of meeting a user-defined target distortion and maintaining a rather small variation for sequences with slow motion, and performs similarly to fixed quantization for fast-motion sequences at the cost of some RD performance. Finally, we present the first implementation of a distributed video encoder on a Texas Instruments TMS320DM6437 digital signal processor. The WZ encoder is efficiently implemented using rate-adaptive low-density parity-check accumulate (LDPCA) codes, exploiting the hardware features and optimization techniques to improve the overall performance. Implementation results show that the WZ encoder is able to encode at 134M instruction cycles per QCIF frame on a TMS320DM6437 DSP running at 700 MHz.
This makes the encoder 29 times faster than the non-optimized encoder implementation. We also implemented a highly efficient DVC decoder using both serial and parallel technology, based on a PC-HPC (high performance cluster) architecture in which the encoder runs on a general-purpose PC and the decoder runs on a multicore HPC. The experimental results show that the parallelized decoder achieves about a 10-times speedup over the serial implementation under various bit rates and GOP sizes, and significant RD gains with regard to the state-of-the-art DISCOVER codec.
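To make the notion of side information concrete, the simplest possible SI generator averages the two neighbouring key frames per pixel. Practical DVC decoders, including the ones discussed above, use motion-compensated interpolation instead; this sketch is purely illustrative.

```python
def interpolated_side_information(prev_frame, next_frame):
    """Naive side information for a Wyner-Ziv frame: per-pixel average of the
    two neighbouring key frames (stand-in for motion-compensated interpolation)."""
    return [[(p + n) // 2 for p, n in zip(prow, nrow)]
            for prow, nrow in zip(prev_frame, next_frame)]
```

The better this estimate, the less parity information the Wyner-Ziv decoder must request, which is why SI generation and refinement dominate DVC research.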
28

Optimisation of free space optical communication for satellite and terrestrial applications

Ituen, Iniabasi E. January 2017 (has links)
The future of global telecommunications looks even more promising with the advent of Free Space Optics (FSO) to complement fibre optics technology. With the main impairments to FSO known to be diffraction and atmospheric turbulence, it is critical to characterise the atmospheric medium adequately for effective FSO system design. Most laser sources can be designed to produce Gaussian-like beam profiles, which suffer from diffraction. To address this, a non-diffracting beam called the Bessel beam is introduced; its central core has been proven to be resistant to diffractive spreading whilst propagating. However, both Gaussian and Bessel beams experience distortion when propagating through atmospheric turbulence. The turbulence strength Cn2 is considered constant for ground-to-ground (terrestrial) applications, but has been shown to vary and gradually weaken with altitude for ground-to-space (satellite) applications. In this research, we investigate the propagation of the two beams in both the ground-to-ground and the ground-to-space scenarios. For the ground-to-space scenario, we define a maximum height of 22 km above which the effect of atmospheric turbulence is considered negligible. We also investigate the propagation of the beams from the ground, beyond the 22 km limit, into deep space. We analyse and compare the performance of the beams for all the scenarios based on predefined performance measures. The Bessel beam offers enhanced performance and is shown to outperform the Gaussian beam on a number of these measures.
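The diffractive spreading that motivates the Bessel beam can be quantified for a Gaussian beam with the standard relation w(z) = w0 * sqrt(1 + (z/zR)^2), where w0 is the beam waist and zR = pi * w0^2 / lambda is the Rayleigh range. A minimal sketch; the parameter values in the note below are illustrative, not taken from the thesis.

```python
import math

def gaussian_beam_radius(z_m, w0_m, wavelength_m):
    """Beam radius w(z) of an ideal Gaussian beam after propagating a distance z:
    w(z) = w0 * sqrt(1 + (z / zR)^2), with Rayleigh range zR = pi * w0^2 / lambda."""
    z_r = math.pi * w0_m ** 2 / wavelength_m
    return w0_m * math.sqrt(1.0 + (z_m / z_r) ** 2)
```

For instance, a 1550 nm beam with a 2.5 cm waist has a Rayleigh range of roughly 1.27 km, beyond which the beam radius grows almost linearly with distance; this is the spreading a non-diffracting central core avoids.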
29

Hardware realization of Discrete Wavelet Transform Cauchy Reed Solomon Minimal Instruction Set Computer architecture for Wireless Visual Sensor Networks

Ong, Jia Jan January 2016 (has links)
The large amount of image data transmitted across Wireless Visual Sensor Networks (WVSNs) increases the data transmission rate and thus the transmission power. This inevitably decreases the operating lifespan of the sensor nodes and affects the overall operation of the WVSN, so limiting power consumption to prolong battery lifespan is one of the most important goals in WVSNs. To achieve this goal, this thesis presents a novel low-complexity Discrete Wavelet Transform (DWT) Cauchy Reed Solomon (CRS) Minimal Instruction Set Computer (MISC) architecture that performs data compression and data encoding (encryption) in a single architecture. Four programme instructions were developed to programme the MISC processor: Subtract and Branch if Negative (SBN), Galois Field Multiplier (GF MULT), XOR, and 11TO8. With these instructions, the DWT CRS MISC was programmed to perform DWT image compression to reduce the image size and then to encode the DWT coefficients with a CRS code to ensure data security and reliability. Both compression and CRS encoding are performed by a single architecture rather than by two separate modules, which would require many more hardware resources (logic slices). By reducing the number of logic slices, the power consumption can be reduced accordingly. Results show that the proposed DWT CRS MISC architecture requires 142 slices (Xilinx Virtex-II), 129 slices (Xilinx Spartan-3E), 144 slices (Xilinx Spartan-3L) and 66 slices (Xilinx Spartan-6). The developed DWT CRS MISC architecture has lower hardware complexity than other existing systems, such as a Crypto-Processor in Xilinx Spartan-6 (4828 slices), Low-Density Parity-Check in Xilinx Virtex-II (870 slices) and ECBC in Xilinx Spartan-3E (1691 slices).
Using an RC10 development board, the developed DWT CRS MISC architecture can be implemented on the Xilinx Spartan-3L FPGA to simulate an actual visual sensor node. This verifies the feasibility of developing a joint compression, encryption and error correction processing framework in WVSNs.
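As an illustration of the compression step, one level of the separable 2-D Haar DWT can be written as below. Haar is the simplest wavelet; the thesis does not state which wavelet its DWT uses, so treat this as a generic sketch of the sub-band decomposition.

```python
def haar_dwt_1d(row):
    """One level of the 1-D Haar transform: pairwise averages (low-pass)
    followed by pairwise differences (high-pass)."""
    low = [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    high = [(row[2 * i] - row[2 * i + 1]) / 2 for i in range(len(row) // 2)]
    return low + high

def haar_dwt_2d(image):
    """One level of the separable 2-D Haar DWT: transform every row, then every
    column, yielding the LL/LH/HL/HH sub-bands (LL holds the approximation)."""
    rows = [haar_dwt_1d(r) for r in image]
    cols = [haar_dwt_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]
```

Compression comes from keeping the small LL sub-band (plus coarsely quantized detail coefficients), which is what would then be fed to the CRS encoder.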
30

Managing configuration history in domestic networks

Spencer, Robert January 2018 (has links)
Domestic networks are gaining in complexity, with an increasing number and variety of devices. Increasing complexity results in greater difficulty in managing configuration and in troubleshooting when problems occur. This thesis presents strategies to assist users in managing the complexity of their networks. The work is split into several parts. First, configuration changes are tracked and users are presented with a timeline of changes to their network. The second feature is the provision of a selective undo system, designed to allow any change to be undone independently of any other; users are also given the option of reverting to an earlier point, either before a specific change or to a specific timestamp. The next feature is the use of notifications: any changes that require further actions can be broadcast to users directly. Changing Wi-Fi configuration is one example. The range of devices in use makes changing Wi-Fi configuration (and the subsequent reconfiguration of devices) a challenge, because the devices affected may be part of the infrastructure of a home (lights or a thermostat, for example). Because these devices have unique methods of network setup, restoring connectivity to every device can be challenging. This thesis therefore also presents a method of changing Wi-Fi configuration which allows users a grace period to reconnect all their devices. Each of these features was assessed by a user study, the results of which are also discussed.
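The timeline-plus-selective-undo idea can be sketched as a change log that is replayed with some entries skipped. This is a minimal illustration under assumed semantics (last-writer-wins per configuration key), not the thesis's implementation; a real system must also resolve conflicts between dependent changes.

```python
class ConfigHistory:
    """Timeline of configuration changes with selective undo: any change can be
    undone independently by replaying all the other changes in order."""

    def __init__(self, initial):
        self.initial = dict(initial)
        self.changes = []          # ordered list of (key, new_value) entries

    def apply(self, key, value):
        """Record a configuration change at the end of the timeline."""
        self.changes.append((key, value))

    def state(self, skip=()):
        """Current configuration, optionally skipping (undoing) changes by index."""
        cfg = dict(self.initial)
        for i, (key, value) in enumerate(self.changes):
            if i not in skip:
                cfg[key] = value
        return cfg

    def revert_to(self, n):
        """Roll back to the state just after the first n changes."""
        self.changes = self.changes[:n]
```

Skipping one entry while replaying the rest is what lets a single change be undone without discarding everything made after it.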
