  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

The elastic-plastic straining of some shells of revolution with special reference to expansion bellows

Marcal, Pedro Vincente Fernandes do Couto January 1963 (has links)
This thesis is intended as a contribution to the problem of limited-life design of symmetrically loaded shells of revolution. The basic equations governing the behaviour of a shell of revolution were differentiated implicitly in order to provide a basis for the non-linear numerical analysis of shells of revolution. Incremental generalized stress-strain relations were then introduced which allowed non-linear stress-strain laws to be taken into account. A later analysis showed that a generalization of the incremental stress-strain relations would enable the Prandtl-Reuss equations to be used in writing the three-dimensional equations of equilibrium in terms of strains. A computer programme based on the derived shell equations was developed which calculated the elastic-plastic strains in axially loaded bellows. The bellows were assumed to be made of an elastic-perfectly plastic Tresca material of the Reuss type. A method of estimating strains after collapse in beams was adapted for the case of the axially loaded bellows. Calculated and experimental results agreed for the one bellows tested beyond collapse. Tests were performed on four axially loaded bellows to obtain the relationship between maximum strains, load and deflection in the plastic region. The experimental and calculated results were found to be in reasonable agreement when compared non-dimensionally. The bellows were finally tested in fatigue, and the results agreed with those of fatigue tests on samples of the bellows material.
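The incremental elastic-perfectly-plastic behaviour the abstract describes can be illustrated with a minimal one-dimensional sketch (this is not the thesis's shell formulation; the modulus `E`, yield stress `sigma_y` and the strain increments are arbitrary example values):

```python
def stress_increment(sigma, d_eps, E=200e3, sigma_y=250.0):
    """One incremental step for a 1D elastic-perfectly-plastic material.

    sigma   : current stress (MPa)
    d_eps   : applied strain increment
    E       : Young's modulus (MPa)
    sigma_y : yield stress (MPa)
    Returns the updated stress, returned to the yield surface if the
    elastic trial stress exceeds it.
    """
    trial = sigma + E * d_eps                   # elastic trial stress
    if abs(trial) <= sigma_y:                   # still elastic
        return trial
    return sigma_y if trial > 0 else -sigma_y   # plastic: cap at yield


def stress_history(strain_increments, E=200e3, sigma_y=250.0):
    """Drive the material through a strain history, accumulating stress."""
    sigma, out = 0.0, []
    for d_eps in strain_increments:
        sigma = stress_increment(sigma, d_eps, E, sigma_y)
        out.append(sigma)
    return out
```

A full shell analysis would apply the same incremental idea to the generalized stresses and strains of the shell equations rather than to a single scalar stress.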
22

Development of a dispenser printer to realise electroluminescent lamps on fabrics

De Vos, Marc January 2016 (has links)
Dispenser-printed electroluminescent (EL) lamps on fabric have been investigated, and a functional dispenser-printed prototype that outperforms the commercial equivalent is presented. The state of the art in dispenser printing and screen printing is considered, and existing patents and prototype devices are reviewed. No examples of printed electroluminescent lamps on fabric were found, providing an opportunity for novel work. A simulation model of the dispenser printer was developed, allowing the best-performing dispense pressure for a defined ink to be found. An experimental method for identifying a suitable dispense pressure was also compared to the simulation; the experimental method offered the same results more quickly, so an experimental approach was selected. The ability to print complex shapes was developed, enhancing the usefulness of the technology for commercial application. A requirement for maintaining a constant separation between the substrate and the dispenser nozzle was identified, and this capability was integrated to allow the printing of complex shapes in a continuous print mode. A range of displacement sensors was reviewed and a selection tested before the Keyence LK-G10 was chosen. The profiles from the Keyence sensor were validated against a commercial profilometer; the University of Southampton system was found unable to measure the transparent conducting layer, although it could measure the transparent interface, which the other displacement sensors could not. Phosphor and transparent conducting layers were investigated, and optimised solutions for different use scenarios, such as different lamp colours, are identified. The route the dispenser nozzle takes to fill a defined area with ink was investigated; of the five infill patterns tested, the rectilinear pattern at 70% density was the best performer. Novel dispenser-printed EL lamps on fabric were demonstrated.
The dispenser-printed EL lamps were compared to University of Southampton screen-printed and commercial screen-printed EL lamps. The results showed that the dispenser-printed EL lamps outperformed the commercial lamps and had comparable performance to the University of Southampton screen-printed EL lamps.
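The serpentine "rectilinear" infill pattern mentioned in the abstract can be sketched as a simple path generator (a hypothetical illustration; the actual printer's path planner is not described here):

```python
def rectilinear_infill(width, height, line_width, density):
    """Generate horizontal print lines filling a width x height rectangle.

    density (0-1] sets the fraction of the area covered: the spacing
    between adjacent lines is line_width / density.  Returns a list of
    ((x0, y), (x1, y)) segments, alternating direction so the nozzle
    travels a continuous serpentine path.
    """
    spacing = line_width / density
    segments = []
    y, left_to_right = 0.0, True
    while y <= height:
        if left_to_right:
            segments.append(((0.0, y), (width, y)))
        else:
            segments.append(((width, y), (0.0, y)))
        left_to_right = not left_to_right
        y += spacing
    return segments
```

At 70% density a 0.7 mm line width gives 1 mm line spacing, so a 10 mm square is filled by eleven alternating passes.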
23

Design and optimisation of user-centric visible-light networks

Li, Xuan January 2016 (has links)
In order to counteract the explosive escalation of wireless tele-traffic, the communication spectrum has gradually been expanded from the conventional radio frequency (RF) band into the optical domain. By integrating the RF band, relying on diverse radio techniques, with the optical bands, next-generation heterogeneous networks (HetNets) are expected to be a potential solution for supporting the ever-increasing wireless tele-traffic. Owing to its abundant unlicensed spectral resources, visible light communication (VLC) combined with advanced illumination constitutes a competent candidate for complementing existing RF networks. Although the advantages of VLC are multi-fold, some challenges arise when incorporating VLC into classic RF HetNet environments, which may require new system architectures. This motivates our research on the system design of user-centric (UC) VLC. Our investigations focus on the system-level design of VLC and comprise three major aspects: 1) cooperative load balancing (LB) in a hybrid VLC and wireless local area network (WLAN), discussed in Chapter 2; 2) UC cluster formation and multiuser scheduling (MUS) in Chapter 3; and 3) an energy-efficient scalable video streaming design example in Chapter 4. Explicitly, we first study VLC as a complementary extension of the existing WLAN. In Chapter 2 we study various conventional cell formations invoked for tackling the significant inter-cell interference (ICI) problem, including the traditional unity/non-unity frequency reuse (FR) techniques as well as the advanced combined transmission (CT) and vectored transmission (VT) schemes. Then a distributed LB algorithm is proposed for a hybrid VLC and WLAN network, which is evaluated from various perspectives. In order to further mitigate the ICI in VLC networks, we focus our attention on novel UC-VLC cluster formation techniques in Chapter 3 and Chapter 4.
The concept of UC cluster formation is the counterpart of conventional network-centric (NC) cell formation: clusters are dynamically constructed according to the users' locations. Relying on graph theory, the joint cluster formation and MUS problem is solved in Chapter 3. Furthermore, another important optimisation aspect in most wireless networks is the achievable energy efficiency (EE). Hence, we design an energy-efficient scalable video streaming scheme for our UC-VLC network, which achieves superior performance compared to NC cells in terms of its attained throughput, EE and quality of service (QoS).
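A distributed LB algorithm of the kind described might, in its simplest greedy form, look like the following sketch (the association metric and the `rate` function are illustrative assumptions, not the scheme proposed in the thesis):

```python
def balance_load(users, access_points, rate):
    """Greedy user-to-AP association balancing per-AP load.

    users         : list of user ids
    access_points : list of AP ids (VLC attocells and the WLAN AP alike)
    rate(u, ap)   : achievable rate for user u on AP ap (0 if out of range)
    Each user joins the AP offering the best rate share given the load
    accumulated so far.  Returns {user: ap}.
    """
    load = {ap: 0 for ap in access_points}
    assignment = {}
    for u in users:
        candidates = [ap for ap in access_points if rate(u, ap) > 0]
        # Rate divided by (load + 1): a crowded AP becomes less attractive.
        best = max(candidates, key=lambda ap: rate(u, ap) / (load[ap] + 1))
        assignment[u] = best
        load[best] += 1
    return assignment
```

Even this toy metric shows the key effect: once the faster VLC attocell fills up, later users spill over to the WLAN AP.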
24

A joint algorithm and architecture design approach to joint source and channel coding schemes

Brejza, Matthew January 2016 (has links)
Shannon's separate source and channel coding theorem suggests that communication at throughputs closely approaching the capacity of the channel can be achieved using a near-entropy source code to compress a source by removing all redundancy, combined with a near-capacity channel code which adds redundancy to increase the resilience to transmission errors. However, in practice, Separate Source and Channel Coding (SSCC) can impose excessive delay and complexity, or cannot tolerate any transmission errors without causing an endless cascade of errors. This motivates Joint Source and Channel Coding (JSCC), where the residual redundancy from a non-optimal source code is used by the channel code in order to increase the error correction capability. In particular, the recently proposed Unary Error Correction (UEC) code is an example of a JSCC scheme, which is well suited to encoding symbols generated during multimedia transmission, such as those produced by an H.264 or H.265 video encoder. Despite this, the UEC code is only suitable for encoding symbols that are generated according to a limited range of probability distributions. Furthermore, owing to their computational complexity, iterative decoder components such as source and channel decoders are usually implemented using specialized dedicated hardware. Despite this, there is little work in the open literature on the hardware implementation of JSCC schemes. Against this background, this thesis jointly considers the algorithm and architecture design of joint source and channel codes for the first time, in order to achieve an increased error correction performance and an improved hardware efficiency. This thesis begins by proposing improvements to the UEC JSCC scheme. Firstly, an adaptive activation order algorithm is extended for use with a more complex UEC scheme, which comprises four iterative components, including a demodulator.
This adaptive activation order algorithm facilitates an improved error correction performance using a reduced number of iterations. Following this, in order to increase the applicability of the UEC code, it is extended and generalized to obtain the novel RiceEC and ExpGEC codes. These codes can be applied to any arbitrary unbounded monotonic symbol distribution, including the symbols produced by the H.265 video codec and the letters of the English alphabet. Furthermore, the practicality of the proposed codes is enhanced to allow a continuous stream of symbol values to be encoded and decoded using only fixed-length system components. This thesis also provides the first hardware implementations of UEC schemes. Owing to their relatively high complexity, many capacity-approaching techniques proposed in the literature have not yet been invoked in Wireless Sensor Network (WSN) applications, despite their potential benefits of facilitating a reduced transmission power or an extended communication range. Against this background, this thesis proposes an energy-efficient architecture comprising multiple Calculation Units (CUs), which is sufficiently flexible to accommodate the different iterative decoder components of a UEC-based JSCC scheme using the same hardware. This architecture achieves a throughput suitable for low-speed video applications, while achieving high hardware utilisation, which is important in cost- and energy-sensitive applications. Following this, a UEC scheme is implemented for very high throughput applications, by extending the philosophy of the Fully Parallel Turbo Decoder (FPTD). More specifically, in the wireless transmission of multimedia information, the achievable transmission throughput and latency may be limited by the processing throughput and latency associated with source and channel coding.
For example, ultra-high-throughput and ultra-low-latency processing of source and channel coding is required by emerging new video transmission applications, such as the first-person remote control of unmanned vehicles. Here, a new architecture is developed by jointly considering the algorithm and hardware implementation, in order to achieve an improved hardware efficiency, high throughput and low latency. This thesis demonstrates the application of these improvements to both the LTE turbo code and the UEC code, where the proposed design achieves a throughput of 450 Mbps on a mid-range FPGA, as well as a factor-of-2.4 improvement in hardware efficiency over previous implementations of the FPTD.
25

Control of upper-limb functional neuromuscular electrical stimulation

Lane, Rodney January 2016 (has links)
Functional electrical stimulation (FES) is the name given to the use of neuromuscular electrical stimulation to achieve patterns of induced movement which are of functional benefit to the user. Systems are available that use FES to aid persons who have suffered an insult to the motor control region of the brain and been left with movement impairment. The aim of this research was to investigate methods of providing an FES system that could have a beneficial effect in restoring arm function. The techniques for applying upper-limb stimulation are well established; however, the methods of controlling it to provide functional use remain lacking. This is because upper-limb movement can be difficult to measure and quantify, as the starting point for any movement may not be well defined. Moreover, the movements needed to complete a useful function such as reaching and grasping require the coordinated control of a number of muscle groups, and that relies on being able to track the position of the limb. Effective control of FES for the arm requires reliable feedback about the position and state of the limb. Electromyograms (EMG) are a measure of the very small electrical signals that are emitted whenever a muscle is 'fired' to move. EMG can therefore be used to detect muscle activity and so can be a useful feedback control input. It does, however, have a number of drawbacks that this research sought to address by combining the method with external motion sensors. The intention had been to use the motion sensors to track the position of the limb and then use the EMG measurements to detect the wearer's movements. FES could then be used to assist the wearer in making a desired movement. Initial studies were done to separately investigate the motion sensing and the EMG measurement components of the system. However, before these could be combined, a more interesting observation was made relating to bioimpedance.
A study of bioimpedance measurements found a relationship between tissue impedance changes and muscle activity. Different methods for measuring bioimpedance were investigated and the results compared, before a practical technique for capturing measurements was developed and demonstrated. A new set of test equipment was made using these findings. Subsequent results using this equipment demonstrated that bioimpedance measurements could be taken from a limb while FES was being used, and that these measurements could be used as a feedback signal to control the FES to maintain a target limb position. This work forms the basis of a novel approach to the control of FES that uses feedback from the user's limb to determine the position of the limb in free space without the need for additional sensors.
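The bioimpedance feedback loop described above can be sketched as a simple proportional control law (a hypothetical illustration; the gain, current limits and the assumed sign of the impedance-contraction relationship are example values, not the thesis's controller):

```python
def fes_controller(target_z, gain=0.5, i_min=0.0, i_max=50.0):
    """Return a feedback law mapping a bioimpedance reading to FES current.

    Assumes the measured impedance falls as the muscle contracts, so
    stimulation is increased while the reading sits above the impedance
    associated with the target limb position (target_z, in ohms), and
    reduced below it.  Current is clamped to the safe range
    [i_min, i_max] mA.
    """
    def update(current, measured_z):
        error = measured_z - target_z        # positive => under-contracted
        new_current = current + gain * error
        return min(i_max, max(i_min, new_current))
    return update
```

In use, `update` would be called once per measurement cycle, feeding each new bioimpedance reading back into the stimulation amplitude.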
26

Characterisation of imperfections in hollow core photonic bandgap fibres

Sandoghchi, Seyed Reza January 2016 (has links)
Over the past decades, the performance of standard single-mode fibre (SSMF) has improved to the point that limited scope now exists for significant further reductions in loss and nonlinearity, which determine the fibre's transmission capacity. Given the current 40% per annum growth in data traffic, and the fact that state-of-the-art data transmission experiments are operating close to the fundamental information-carrying limit of SSMF, there is strong interest in developing radically new fibres capable of much higher data-carrying capacity. Recently, a potentially disruptive new type of fibre, the hollow-core (HC) photonic bandgap fibre (PBGF), has emerged as a credible candidate. It guides light predominantly (i.e. ~99%) in air, providing a unique set of optical properties such as ultralow nonlinearity, ultimate low signal latency and the potential for lower loss than SSMF. However, to enable the application of HC-PBGFs to data transmission, the fabrication of long lengths of uniform, low-loss HC-PBGF is essential, which had not been possible until recently. Although high-loss sections along HC-PBGFs and more frequent fibre breaks than in conventional fibres had been observed empirically, very little was known about the root cause of these issues at the start of this PhD project. The investigation of the problems preventing the fabrication of long lengths of uniform, defect-free HC-PBGF is the topic of this thesis. I developed and/or applied a suite of characterisation methods, such as IR side-scatter imaging, X-ray tomographic analysis and optical side-scattering radiometry, aimed at identifying defects and imperfections that arise in HC-PBGFs. Through these techniques, I studied the morphology and longitudinal evolution (e.g. formation, stabilisation and decay) of such defects, the first systematic study of this kind for HC-PBGFs. Furthermore, I was able to trace their origin back to well-defined stages in the fabrication process (e.g. preforms and canes).
My observations suggest that all, or at least most, defects arise from contamination or stacking errors, which are unintentionally introduced when the HC-PBGF preforms are assembled from arrays of glass capillaries. Ultimately, the methods I have developed and the findings described in this PhD thesis helped develop ways to greatly reduce (and hopefully, in future, completely eliminate) these defects, which resulted in several breakthroughs, including the achievement of the current record length of low-loss HC-PBGF: an 11 km long fibre with a uniform 5.2 dB/km loss and more than 200 nm of transmission bandwidth, a factor of 10 longer than any length reported before the start of this project.
27

Enhancing e-learning by integrating knowledge management and social media

Alkinani, Edrees January 2016 (has links)
Knowledge management (KM), e-learning, and social media are three academic research fields with several concepts in common. In e-learning, a competence is what students will be able to do by the end of the course. A competence consists of, at least, a capability verb and subject matter content. Students undertaking e-learning are usually given an assignment or task that corresponds to a competence. Thus, a learning task involves a number of competences, and achieving them results in completing the task successfully. Providing students with an ontological structure of the learning task could improve their learning, and the ontological structure of the task competences is considered to be a KM technique. The ontology will be constructed from various categories of subject matter that are linked to related competences. The learning task or assignment could be achieved collaboratively through introducing additional components to a teaching and learning situation. These are called collaborative working competences, and will be developed ontologically. They serve as the capability for collaboration in a social media context. Students can study both what is required for a given task or assignment and what is required for group working, then form and manage their own groups. There will be an application using a social media tool, such as Facebook, to support collaborative working on assignments and facilitate group formation. Thus, the integration of KM and social media may be expected to enhance e-learning. The student needs to undertake a number of activities, such as establishing the task competences, the collaborative working competences and the student skills, then selecting group members based on the available competences. Through these activities, a research conjecture is set up: if competences, including collaborative working, are ontologically structured through a social media application, then the students would achieve better learning results. 
More significantly, the research aims to contribute its Social Media Competence (SMC) application in order to provide students with ontologically structured competences to accomplish given assignments successfully. The SMC application supports better collaborative working and more professional group formation procedures by generating content from Facebook users' profiles to compare their competences. The methods of constructing ontologies for task competences and collaborative working are described, and the design and implementation of the SMC application are presented. The application is integrated with a Facebook group in order to support collaboration. Besides exploring competences, students can contribute to the knowledge base by adding their own resources to a specific competence in a structured and organised manner. Experiments were carried out to evaluate the SMC application by asking students to complete given assignments collaboratively under two conditions: using the SMC application (app users) and using paper-based documentation in plain-text format (non-app users). In addition, students' satisfaction was measured along a number of dimensions, such as their evaluation of the ontologically structured competences and of the groups' actual collaborative working. The results revealed that the SMC application both enables groups of students to obtain higher results in given assignments and supports better collaborative working among them. In addition, the groups of students who used the SMC application were positively satisfied with its content. The ontologically structured competences were also applied to teaching in order to improve teaching performance, and the opinions of instructors were measured. The results showed that instructors would teach better by using ontologically structured competences in their teaching. Furthermore, the teaching and learning environment was investigated by using Facebook Groups.
The opinions of instructors were measured in order to confirm whether creating groups for module activities could enhance the teaching and learning environment. The results revealed that instructors confirmed that doing so would enhance the teaching and learning environment. This research contributes ontologically structured competences for both learning tasks and collaborative working. The ontological design for learning tasks was based on subject-matter categories, and enables students to achieve better learning results in a given assignment. This reflects the effective design of the ontology in terms of structuring classes, and analysing the competences and subject matter to be fitted into the ontology in an appropriate presentation. The ontology design for collaborative working was based on commonly available social skills, with the related roles for performing each skill. It enables students to conduct their group work on a given assignment more effectively when using Facebook as a social media tool. Moreover, the research contributes to better group formation by reading social information, such as professional skills, from students' Facebook profiles in order to compare their competences in both the learning task and group work. Furthermore, the research contributes to teaching by applying ontologically structured competences to improve teaching performance. Finally, the research contributes to enhancing the teaching and learning environment by setting up Facebook groups for module activities.
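The notion that "a competence consists of, at least, a capability verb and subject matter content", linked to tasks through subject-matter categories, could be modelled along these lines (a hypothetical data structure for illustration, not the ontology actually built in the thesis):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Competence:
    verb: str      # capability verb, e.g. "analyse"
    subject: str   # subject matter content, e.g. "sorting algorithms"


def required_competences(task, ontology):
    """Return the competences linked to a task via subject-matter categories.

    ontology maps task names to subject-matter categories ("tasks"), and
    each category to its competences ("categories").  Achieving every
    returned competence corresponds to completing the task successfully.
    """
    tasks, categories = ontology["tasks"], ontology["categories"]
    return [c for cat in tasks.get(task, []) for c in categories.get(cat, [])]
```

A group-formation step could then compare each student's profile-derived competences against the list this lookup returns.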
28

Polarization sensitive ultrafast laser material processing

Zhang, Jingyu January 2016 (has links)
In this thesis, I concentrate on ultrafast laser interactions with various materials such as fused silica, crystalline silicon, amorphous silicon and nonlinear crystals. The first polarization-sensitive ultrafast laser-material interaction to be illustrated was second harmonic generation in lithium niobate by tightly focused cylindrical vector beams. The generated second harmonic patterns were experimentally demonstrated and theoretically explained, and the existence of the longitudinal component of the fundamental light field was proven. The same beams were used to modify fused silica glass; the distribution of the electric field in the focal region was visualized by the presence of self-assembled nanogratings. Crystalline and amorphous silicon were also modified by the focused cylindrical vector beams, and the generated modifications matched well with the theoretical simulations. No polarization-dependent structure was observed under single-pulse irradiation above the silicon surface; the generated isotropic crater structures, with their smooth surface, can be employed as a wavefront sensor. Unexpectedly, an entirely different modification was observed after double-pulse laser irradiation. The size and orientation of the structure can be independently manipulated by the energy of the first pulse and the polarization of the second pulse. Theoretical analysis was conducted and the formation mechanism of the polarization-dependent structures was explained. This structure on the silicon surface can be used for polarization-multiplexed optical memory. One type of polarization-sensitive ultrafast laser modification in fused silica is the nanograting. This modification exhibits form birefringence and can therefore be exploited for multi-dimensional optical data storage. Optimized data recording parameters were determined by sets of experiments. Stress-induced birefringence was observed and explained by material expansion under different conditions.
Finally, the multilevel encoding of the polarization and intensity states of light with self-assembled nanostructures was illustrated. A new writing setup was designed, incorporating a spatial light modulator, a half-wave-plate matrix and a 4F optical system. The data recording rate was increased by two orders of magnitude compared to a conventional laser direct-writing setup using polarization optics. The recording and readout of digital information were experimentally demonstrated: we successfully recorded a digital copy of a 310 KB file across three layers. The benefits of 5D optical data storage, such as long lifetime and high capacity, were illustrated. In addition, the theoretical limitations of the current writing and readout systems were discussed and several upgraded systems were proposed.
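Multilevel encoding of digital data into polarization and intensity states can be sketched as follows (the level counts `n_pol` and `n_int` are illustrative assumptions; the number of recordable states in practice depends on the writing optics):

```python
import math


def encode_voxels(bits, n_pol=8, n_int=4):
    """Map a bit stream onto multilevel (polarization, intensity) voxel states.

    With n_pol polarization angles and n_int intensity levels, each voxel
    stores log2(n_pol * n_int) bits (5 with the defaults).  Returns a list
    of (polarization index, intensity index) pairs; trailing bits that do
    not fill a whole voxel are dropped in this sketch.
    """
    bits_per_voxel = int(math.log2(n_pol * n_int))
    voxels = []
    for i in range(0, len(bits) - bits_per_voxel + 1, bits_per_voxel):
        value = int("".join(map(str, bits[i:i + bits_per_voxel])), 2)
        voxels.append((value // n_int, value % n_int))
    return voxels
```

Readout would invert the mapping, recovering `value = pol * n_int + intensity` from each measured voxel.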
29

Unary error correction coding

Zhang, Wenbo January 2016 (has links)
In this thesis, we introduce the novel concept of Unary Error Correction (UEC) coding. Our UEC code is a Joint Source and Channel Coding (JSCC) scheme conceived for performing both the compression and error correction of multimedia information during its transmission from an encoder to a decoder. The UEC encoder generates a bit sequence by concatenating and encoding unary codewords, while the decoder operates on the basis of a trellis that has only a modest complexity, even when the source symbol values are selected from a set having an infinite cardinality, such as the set of all positive integers. This trellis is designed so that the transitions between its states are synchronous with the transitions between the consecutive unary codewords in the concatenated bit sequence. This allows the UEC decoder to exploit any residual redundancy that remains following UEC encoding for the purpose of error correction by using the classic Bahl, Cocke, Jelinek and Raviv (BCJR) algorithm. Owing to this, the UEC code is capable of mitigating any potential capacity loss, hence facilitating near-capacity operation, even when the cardinality of the symbol value set is infinite. We investigate the applications, characteristics and performance of the UEC code in the context of digital telecommunications. Firstly, we propose an adaptive UEC design for expediting the decoding process. By concatenating the UEC code with a turbo code, we conceive a three-stage concatenated adaptive iterative decoding technique. A Three-Dimensional (3D) EXtrinsic Information Transfer (EXIT) chart technique is proposed for both controlling the dynamic adaptation of the UEC trellis decoder, as well as for controlling the activation order between the UEC decoder and the turbo decoder. Secondly, we develop an irregular UEC design for ‘nearer-capacity’ operation. 
The irregular scheme employs different UEC parametrizations for the coding of different subsets of each message frame, operating on the basis of a single irregular trellis having a novel structure. This allows the irregularity to be controlled on a fine-grained bit-by-bit basis, rather than on a symbol-by-symbol basis. Hence, nearer-to-capacity operation is facilitated by exploiting this fine-grained control of the irregularity. Thirdly, we propose a learning-aided UEC design for transmitting symbol values selected from unknown and non-stationary probability distributions. The learning-aided UEC scheme is capable of heuristically inferring the source symbol distribution, hence eliminating the requirement for any prior knowledge of the symbol occurrence probabilities at either the transmitter or the receiver. This is achieved by inferring the source distribution from the received symbols and by feeding this information back to the decoder. In this way, the quality of the recovered symbols and the estimate of the source distribution can be gradually improved over successive frames, hence allowing reliable near-capacity operation to be achieved even if the source is unknown and non-stationary. Finally, we demonstrate that the research illustrated in this thesis can be extended in several directions, by highlighting a number of opportunities for future work. The techniques proposed for enhancing the UEC code can be extended to the Rice Error Correction (RiceEC) code, to the Elias Gamma Error Correction (EGEC) code and to the Exponential Golomb Error Correction (ExpGEC) code. In this way, our UEC scheme may be extended to a family of universal error correction codes, which facilitate the near-capacity transmission of infinite-cardinality symbol alphabets having any arbitrary monotonic probability distribution, as well as providing a wider range of applications.
With these benefits, this thesis may contribute to future standards for the reliable near-capacity transmission of multimedia information, having significant technical and economic impact.
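The unary codewords at the heart of the UEC scheme follow a simple rule: a positive integer n maps to a run of n-1 identical bits followed by a terminating bit of the opposite value (one common convention; the bit polarity may differ from the thesis's):

```python
def unary_encode(symbols):
    """Encode positive integers as unary codewords: n -> (n-1) ones, then a zero.

    Works for any positive integer, which is why the symbol alphabet can
    have infinite cardinality.
    """
    bits = []
    for n in symbols:
        bits.extend([1] * (n - 1))
        bits.append(0)
    return bits


def unary_decode(bits):
    """Invert unary_encode: count the ones before each terminating zero."""
    symbols, run = [], 0
    for b in bits:
        if b == 1:
            run += 1
        else:
            symbols.append(run + 1)
            run = 0
    return symbols
```

The zero terminators mark the codeword boundaries, which is what lets the UEC trellis keep its state transitions synchronous with the transitions between consecutive codewords.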
30

Human mobility models and routing protocols for mobile social networks

Pholpabu, Pitiphol January 2016 (has links)
In mobile social networks (MSNs), there are a number of challenges to face, including the design of well-performing routing protocols. Since the performance evaluation of a routing protocol in the real world is costly and time-consuming, it is usually more practical to evaluate performance using synthetic data generated by simulations. Accordingly, a number of mobility models have been proposed to provide real-trace-like scenarios that can be used in the development and performance evaluation of routing protocols. In this thesis, we consider the problem of dynamic routing in MSNs. First, two mobility models are proposed: the Preferred-Community-Aware Mobility (PCAM) and Role Playing Mobility (RPM) models. While designed around the simplicity and randomness of the Random WayPoint (RWP) mobility model, the PCAM model enhances it by exploiting the fact that people often have favourite places to visit, giving rise to the human social behaviour of mutually preferred communities. The RPM model, in turn, further improves the PCAM model by jointly considering people's behaviour of mutually preferred communities and their daily schedules. Then, based on the proposed mobility models, we design routing protocols and investigate the effect of human social behaviour on routing performance in MSNs. Firstly, a Social Contact Probability assisted Routing (SCPR) protocol is proposed, which is capable of exploiting the properties of encounters between mobile nodes (MNs) and the strength of the relationship between MNs and communities. Secondly, considering that energy-efficiency issues have not been addressed in the existing routing protocols for MSNs, we propose two types of energy-concerned routing protocols: the Energy-Concerned Routing (EnCoR) and Energy-efficient Social Distance Routing (ESDR) protocols.
It can be shown that these two types of routing protocols are capable of achieving a trade-off among Energy Consumption (EC), Delivery Ratio (DR) and Delay (D), shortened as the EC/DR/D trade-off. Specifically, the EnCoR protocol controls the EC/DR/D trade-off via a threshold introduced into the route selection process, while the ESDR protocol achieves the EC/DR/D trade-off by taking into account the length of messages and an estimate of the number of hops. Furthermore, we show that, by invoking our EnCoR or ESDR scheme, most existing routing protocols can be readily extended to corresponding versions that are flexible enough to achieve the EC/DR/D trade-off. In this thesis, the performance of the proposed routing protocols is investigated with the aid of simulations and compared with a range of existing routing protocols for MSNs. Our studies and performance results show that the proposed protocols are capable of efficiently integrating the merits of the existing protocols, namely high delivery ratio, low delivery latency and low resource consumption, while simultaneously circumventing their respective shortcomings. The proposed protocols attain a good trade-off among the delivery ratio, average delivery delay and the cost of resources (including energy) required for operating the protocols.
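The Random WayPoint model that PCAM and RPM build upon can be sketched in a few lines (an illustrative implementation; the PCAM/RPM models would bias the destination choice toward preferred communities rather than sampling uniformly):

```python
import random


def random_waypoint(steps, area=(100.0, 100.0), speed=1.0, seed=0):
    """Trace one mobile node under the Random WayPoint (RWP) model.

    The node repeatedly picks a uniform random destination in the area and
    moves toward it at constant speed; on arrival it picks a new waypoint.
    Returns the list of visited (x, y) positions, including the start.
    """
    rng = random.Random(seed)
    x, y = rng.uniform(0, area[0]), rng.uniform(0, area[1])
    dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = dest[0] - x, dest[1] - y
        dist = (dx * dx + dy * dy) ** 0.5
        if dist <= speed:                         # arrived: pick a new waypoint
            x, y = dest
            dest = (rng.uniform(0, area[0]), rng.uniform(0, area[1]))
        else:                                     # step toward the waypoint
            x, y = x + speed * dx / dist, y + speed * dy / dist
        path.append((x, y))
    return path
```

Replacing the two `rng.uniform` destination draws with a draw weighted toward a node's preferred communities is the essence of the PCAM-style enhancement.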
