
Social niche construction : evolutionary explanations for cooperative group formation

Powers, Simon T. January 2010 (has links)
Cooperative behaviours can be defined as those that benefit others at an apparent cost to self. How these kinds of behaviours can evolve has been a topic of great interest in evolutionary biology, for at first sight we would not expect one organism to evolve to help another. Explanations for cooperation rely on the presence of a population structure that clusters cooperators together, such that they enjoy the benefits of each other's actions. But the question that has been left largely unaddressed is: how does this structure itself evolve? If we want to truly explain why organisms cooperate, then we need to explain not just their adaptation to their social environment, but why they live in that environment. It is well known that individual genetic traits can affect population structure; an example is extracellular matrix production by bacteria in a biofilm. Yet the concurrent evolution of such traits with social behaviour is very rarely considered. We show here that social behaviour can exert indirect selection pressure on structure-modifying traits, causing individuals to adaptively modify their population structure to support greater cooperation. Moreover, we argue that any component of selection on structure-modifying traits that is due to social behaviour must be in the direction of increased cooperation; that component of selection cannot favour the conditions for greater selfishness. We then examine the conditions under which this component of selection on population structure exists. Thus, we argue not only that population structure can drive the evolution of cooperation, as in classical models, but that the benefits of greater cooperation can in turn drive the evolution of population structure: a positive feedback process that we call social niche construction.
We argue that this process is necessary to provide an adaptive explanation for some of the major transitions in evolution (such as from single-celled to multi-celled organisms, and from solitary insects to eusocial colonies). Any satisfactory account of these transitions must explain how the individuals came to live in a population structure that supported high degrees of cooperation, as well as showing that cooperation is individually advantageous given that structure.
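The positive feedback loop described in this abstract can be illustrated with a deliberately minimal two-variable sketch (not a model from the thesis itself): cooperator frequency `p` evolves under a Hamilton-style condition with assortment `r = 1/g`, while the social component of selection on group size `g` only ever pushes toward structures that support more cooperation. All parameter values and functional forms here are illustrative assumptions.

```python
def step(p, g, b=3.0, c=1.0, lr=0.05):
    """One generation of a toy social niche construction model.
    p: cooperator frequency; g: typical group size, with assortment r = 1/g.
    b, c: benefit and cost of cooperation; lr: update rate (all illustrative).
    """
    r = 1.0 / g
    # Selection on cooperation: cooperators spread when r*b > c (Hamilton-style).
    p = min(max(p + lr * p * (1 - p) * (r * b - c), 0.0), 1.0)
    # Social component of selection on structure: cooperators benefit from
    # smaller groups (higher r), so this term only ever pushes toward
    # structures supporting more cooperation, never toward selfishness.
    g = max(g - lr * p * b, 1.0)
    return p, g

# Start with moderate cooperation in groups too large to sustain it (r*b < c);
# the structure/cooperation feedback nevertheless drives both to fixation.
p, g = 0.5, 4.0
for _ in range(1000):
    p, g = step(p, g)
```

Under these toy parameters the population first drifts toward smaller groups, which then makes cooperation individually advantageous, closing the feedback loop the abstract describes.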

Nanostructured polymers : morphology and properties

Gherbaz, Gabriele January 2009 (has links)
This study aims to investigate the relationship between the morphology and properties of non-polar polymers in the presence of polar additives of different natures. The addition of the physical gel dibenzylidene sorbitol (DBS) to a polyethylene (PE) blend has been shown to act as a nucleation site for the polymer. Electron microscopy was used to reveal the fibrillar network formed by the DBS and its interaction with the PE. Moreover, the nucleation density in each material was obtained as a function of the crystallization temperature, which showed an increase in the number of nuclei in the clarified system compared to the unclarified one; however, this was found to be temperature dependent. The nucleation of PE on DBS was also studied through the induction time, which revealed a reduced surface energy of the polymer nucleus in the presence of the DBS. Space charge measurements were taken to investigate charge transport in PE/DBS blends, and low concentrations of the gelator were found to improve the space charge distribution. The same polyethylene blend was then also studied upon addition of relatively polar ethylene/vinyl acetate copolymers (EVA), with a VA content varying from 9 % to 40 %. Morphology studies showed that three main factors control the phase separation, namely the time the blend is kept in the melt, the PE:EVA ratio, and the EVA molecular weight. However, breakdown testing demonstrated that the polarity of EVA decreased the breakdown strength of the blends, independently of the morphology. Finally, a preliminary study was conducted with EVA-based nanocomposites to determine the effect of filler on the dielectric properties of the nanocomposite. Two relatively polar copolymers, EVA9 and EVA18, were processed by solution blending together with 5 % of o-MMT (I30P and I44PA), and the time of solution blending was varied from 10 min to 100 min.
X-ray scattering data showed intercalation in the case of EVA9-based nanocomposites and potential exfoliation for EVA18-based nanocomposites. However, X-ray results suggest that the solution blending could extract a fraction of the organo-modified ions from in between the MMT galleries, leading to shrinkage of the clay spacing. The nanocomposite was also analysed from the point of view of its breakdown properties, which were shown to be unaffected by the presence of fillers.

Multiprocessing neural network simulator

Kulakov, Anton January 2013 (has links)
Over the last few years tremendous progress has been made in neuroscience by employing simulation tools for investigating neural network behaviour. Many simulators have been created during the last few decades, and their number and set of features continually grow due to persistent interest from groups of researchers and engineers. Simulation software able to simulate a large-scale neural network has been developed and is presented in this work. Based on a highly abstract integrate-and-fire neuron model, a clock-driven sequential simulator has been developed in C++. The program is able to associate input patterns with output patterns. The novel biologically plausible learning mechanism uses Long-Term Potentiation (LTP) and Long-Term Depression (LTD) to change the strength of the connections between the neurons based on a global binary feedback signal. The sequentially executed model was later extended to a multi-processor system, which executes the described learning algorithm using an event-driven technique on a parallel distributed framework, simulating a neural network asynchronously. This allows the simulation to manage larger-scale neural networks while remaining immune to processor failure and communication problems. The main benefit of the resulting multi-processor neural network simulator is the ability to simulate large-scale neural networks using highly parallel distributed computing. To that end, the design of the simulator incorporates an efficient weight-adjusting algorithm and an efficient mechanism for asynchronous local communication between processors.
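A minimal sketch of the kind of mechanism this abstract describes (integrate-and-fire units whose active synapses are potentiated or depressed according to a single global binary reward) might look as follows. The class, method names and constants are invented for illustration and do not come from the thesis; the simulator itself is written in C++, whereas this sketch uses Python/NumPy for brevity.

```python
import numpy as np

class LIFNetwork:
    """Toy clock-driven integrate-and-fire layer with a global binary
    reward signal gating LTP/LTD. Illustrative sketch only."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.uniform(0.0, 1.0, (n_out, n_in))  # synaptic weights
        self.threshold = 1.0

    def forward(self, x):
        # Membrane potential accumulates weighted input for one clock tick;
        # a neuron emits a spike (1.0) when its potential crosses threshold.
        v = self.w @ x
        return (v >= self.threshold).astype(float)

    def learn(self, x, out, reward, lr=0.1):
        # Global binary feedback: potentiate synapses between co-active
        # input/output pairs on success (LTP), depress them on failure (LTD).
        sign = 1.0 if reward else -1.0
        self.w += sign * lr * np.outer(out, x)
        np.clip(self.w, 0.0, 2.0, out=self.w)  # keep weights bounded

net = LIFNetwork(4, 2)
x = np.ones(4)
before = net.w.copy()
out = np.array([1.0, 0.0])
net.learn(x, out, reward=False)  # failure: depress active synapses of neuron 0
```

The key design point mirrored here is that the feedback is a single bit shared by the whole network, so no per-synapse error signal ever needs to be communicated between processors.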

Awareness support for learning designers in collaborative authoring for adaptive learning

Nurjanah, Dade January 2013 (has links)
Adaptive learning systems offer students a range of appropriate learning options based on the learners' characteristics. It is, therefore, necessary for such systems to maintain a hyperspace and knowledge space consisting of a large volume of domain and pedagogical knowledge, learner information, and adaptation rules. As a consequence, for a solitary teacher, developing learning resources would be time-consuming and would require the teacher to be an expert in many topics. In this research, the problems of authoring adaptive learning resources are classified into issues concerning interoperability, efficiency, and collaboration. This research particularly addresses the question of how teachers can collaborate in authoring adaptive learning resources and be aware of what has happened in the authoring process. In order to experiment with collaboration, it was necessary to design a collaborative authoring environment for adaptive learning. This was achieved by extending an open-source authoring tool for IMS Learning Design (IMS LD), ReCourse, into a prototype of Collaborative ReCourse that includes two workspace awareness features: Notes and History. It is designed as a tool for asynchronous collaboration among small groups of learning designers. IMS LD supports interoperability and adaptation. Two experiments were conducted. The first was a workspace awareness study in which participants took part in an artificial collaborative scenario. They were divided into two groups: one worked with ReCourse, the other with Collaborative ReCourse. The results provide evidence of the advantages of Notes and History for enhancing workspace awareness in collaborative authoring of learning designs. The second study tested the system more thoroughly, as the participants had to work toward real goals over a much longer time frame. They were divided into four groups: two worked with ReCourse, while the others worked with Collaborative ReCourse.
The experimental results showed that authoring of learning designs can be approached with a Process Structure method with implicit coordination and without role assignment. They also provide evidence that collaboration is possible for authoring IMS LD Level A for non-adaptive materials and Level B for adaptive materials, and that Notes and History assist in producing good-quality output. This research makes several contributions. From the literature study, it presents a comparative analysis of existing authoring tools, as well as learning standards. Furthermore, it presents a collaborative authoring approach for creating learning designs and describes the granularity level at which collaborative authoring of learning designs can be carried out. Finally, experiments using this approach show the advantages of having Notes and History for enhancing workspace awareness, and how they benefit the quality of learning designs.

Memory and functional unit design for vector microprocessors

Boettcher, Matthias January 2014 (has links)
Modern mobile devices employ SIMD datapaths to exploit small-scale data-level parallelism, achieving the performance required to process a continuously growing number of computation-intensive applications within a severely energy-constrained environment. The introduction of advanced SIMD features expands the applicability of vector ISA extensions from media and signal processing algorithms to general-purpose code. Considering the high memory bandwidth demands and the complexity of execution units associated with those features, this dissertation focuses on two main areas of investigation: the efficient handling of parallel memory accesses, and the optimization of vector functional units. A key observation, obtained from simulation-based analysis of the type and frequency of memory access patterns exhibited by general-purpose workloads, is the tendency of consecutive memory references to access the same page. Exploiting this and further observations, Page-Based Memory Access Grouping enables a level-one data cache interface to utilize single-ported TLBs and cache banks to achieve performance similar to multi-ported components, while consuming significantly less energy. Page-Based Way Determination extends the proposed scheme with TLB-coupled structures holding way information on recently accessed lines. These structures improve the energy efficiency of the vast majority of memory references by enabling them to bypass tag arrays and directly target individual cache ways. A vector benchmarking environment, comprising a flexible ISA extension, a parameterizable simulation framework and a corresponding benchmark suite, is developed and utilized in the second part of this thesis to facilitate investigations into the design aspects and potential performance benefits of advanced SIMD features.
Based on this environment, a set of microarchitecture optimizations is introduced, including techniques to compute hardware-interpretable masks for segmented operations, partition scans to allow specific energy/performance trade-offs, reuse existing multiplexers to process predicated and segmented vectors, accelerate scans on incomplete vectors, efficiently handle micro-ops fully comprised of predicated elements, and reference multiple physical registers within individual operands to improve the utilization of the vector register file.
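The core observation behind Page-Based Memory Access Grouping (consecutive references tend to hit the same page, so one TLB lookup can serve a whole run of accesses) can be sketched in a few lines. The 4 KiB page size and the greedy run-based grouping are simplifying assumptions for illustration, not the dissertation's actual hardware mechanism.

```python
PAGE_SHIFT = 12  # assume 4 KiB pages

def group_by_page(addresses):
    """Greedy grouping of a memory access stream: consecutive accesses to
    the same page are merged into one batch, so a single-ported TLB needs
    one translation per batch instead of one per access. Sketch only."""
    batches = []  # list of (page_number, [addresses in the run])
    for addr in addresses:
        page = addr >> PAGE_SHIFT
        if batches and batches[-1][0] == page:
            batches[-1][1].append(addr)  # same page: extend the current batch
        else:
            batches.append((page, [addr]))  # page change: start a new batch
    return batches

# Six accesses, mostly sequential within a page, collapse to three batches,
# i.e. three TLB lookups instead of six.
batches = group_by_page([0x1000, 0x1004, 0x1008, 0x2000, 0x2010, 0x1000])
```

The same run structure is what lets way information cached alongside the TLB entry (as in Page-Based Way Determination) be reused across every access in a batch.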

Low-complexity near-optimum detection techniques for non-cooperative and cooperative MIMO systems

Wang, Li January 2010 (has links)
In this thesis we first introduce various reduced-complexity near-optimum Sphere Detection (SD) algorithms, including the well-known depth-first SD, the K-best SD and the recently proposed Optimized Hierarchy Reduced Search Algorithm (OHRSA), followed by comparative studies of their applications, characteristics, performance and complexity in the context of uncoded non-cooperative Multiple-Input Multiple-Output (MIMO) systems using coherent detection. Particular attention is devoted to Spatial Division Multiple Access (SDMA) aided Orthogonal Frequency Division Multiplexing (OFDM) systems, which are considered to constitute a promising candidate for next-generation mobile communications. It is widely recognized that the conventional List SD (LSD) employed in channel-coded iterative detection aided systems may still impose a potentially excessive complexity, especially when applied to high-throughput scenarios employing high-order modulation schemes and/or supporting a high number of transmit antennas/users. Hence, in this treatise three complexity-reduction schemes are devised specifically for LSD-aided iterative receivers in the context of high-throughput channel-coded SDMA/OFDM systems, in order to maintain a near-optimum performance at a reduced complexity. Explicitly, based on the exploitation of the soft-bit information fed back by the channel decoder, the iterative center-shifting and Apriori-LLR-Threshold (ALT) schemes are contrived, which are capable of achieving a significant complexity reduction. Additionally, a powerful three-stage serially concatenated scheme is created by intrinsically amalgamating our proposed center-shifting-assisted SD with the decoder of a Unity-Rate Code (URC). For the sake of achieving a near-capacity performance, Irregular Convolutional Codes (IrCCs) are used as the outer code for the proposed iterative center-shifting SD aided three-stage system.
In order to attain extra coding gains along with transmit diversity gains for Multi-User MIMO (MU-MIMO) systems, where each user is equipped with multiple antennas, we contrive a multilayer tree-search based K-best SD scheme, which allows us to apply the Sphere Packing (SP) aided Space-Time Block Coding (STBC) scheme to MU-MIMO scenarios, where a near Maximum-a-Posteriori (MAP) performance is achieved at a low complexity. An alternative means of achieving transmit diversity, while circumventing the cost and size constraints of implementing multiple antennas on a pocket-sized mobile device, is cooperative diversity, which relies on antenna sharing amongst multiple cooperating single-antenna-aided users. We design a realistic cooperative system which operates without assuming knowledge of the Channel State Information (CSI) at the transceivers, by employing differentially encoded modulation at the transmitter and non-coherent detection at the receiver. Furthermore, a new Multiple-Symbol Differential Sphere Detection (MSDSD) scheme is contrived in order to render the cooperative system employing either the Differential Amplify-and-Forward (DAF) or the Differential Decode-and-Forward (DDF) protocol more robust to the detrimental channel-envelope fluctuations of high-velocity mobility environments. Additionally, for the sake of achieving the best possible performance, a resource-optimized hybrid relaying scheme is proposed to exploit the complementarity of the DAF- and DDF-aided systems. Finally, we investigate the benefits of introducing cooperative mechanisms into wireless networks from a pure channel capacity perspective, and from the practical perspective of approaching the Discrete-input Continuous-output Memoryless Channel (DCMC) capacity of the cooperative network with the aid of our proposed Irregular Distributed Hybrid Concatenated Differential (Ir-DHCD) coding scheme.
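To make the K-best idea concrete: the detector searches the symbol tree breadth-first, one antenna layer at a time, and at every layer keeps only the K partial candidates with the smallest accumulated Euclidean distance. The following is a simplified real-valued sketch, assuming the channel matrix has already been reduced to upper-triangular form (e.g. via a QR decomposition); it is illustrative only, not the thesis's implementation.

```python
import numpy as np

def k_best_detect(R, y, alphabet, K=4):
    """Breadth-first K-best tree search for y ≈ R s, with R upper-triangular
    and symbols s_i drawn from `alphabet`. Detection proceeds from the last
    row upward, keeping the K lowest partial Euclidean distances per level."""
    n = R.shape[0]
    candidates = [(0.0, [])]  # (partial distance, [s_i, s_{i+1}, ..., s_{n-1}])
    for i in range(n - 1, -1, -1):
        expanded = []
        for dist, syms in candidates:
            for s in alphabet:
                trial = [s] + syms  # prepend the newly hypothesized symbol s_i
                # Residual of row i given the symbols fixed so far.
                r = y[i] - sum(R[i, i + j] * trial[j] for j in range(len(trial)))
                expanded.append((dist + r * r, trial))
        # Prune: keep only the K best partial candidates for the next level.
        candidates = sorted(expanded, key=lambda t: t[0])[:K]
    return candidates[0][1]

R = np.array([[2.0, 1.0],
              [0.0, 1.0]])
s_true = [1.0, -1.0]
y = R @ np.array(s_true)  # noiseless received vector for the sketch
s_hat = k_best_detect(R, y, alphabet=[-1.0, 1.0], K=2)
```

The complexity is fixed at K × |alphabet| metric evaluations per level, which is exactly the property that makes K-best SD attractive compared with the variable-complexity depth-first search.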

Test and diagnosis of resistive bridges in multi-Vdd designs

Khursheed, Syed Saqib January 2010 (has links)
A key design constraint on circuits used in hand-held devices is power consumption, mainly due to battery life limitations. Adaptive power management (APM) techniques aim to increase battery life by adjusting the supply voltage (Vdd) and operating frequency according to the workload. APM-enabled devices raise a number of challenges for existing manufacturing test and diagnosis techniques, as certain defects exhibit Vdd-dependent detectability. This means that, to achieve 100% fault coverage, APM-enabled devices should be tested at all operating voltages using repetitive tests. Repetitive tests at several Vdd settings are undesirable, as they increase the cost of manufacturing test. This thesis provides two new and cost-effective Design for Test (DFT) techniques to avoid repetitive tests, thereby reducing test cost. The first technique uses test point insertion (TPI) to reduce the number of test Vdd settings. TPI capitalizes on the observation that each resistive bridge defect corresponds to a large number of logic faults, including detectable and non-detectable ones. It targets resistive bridges requiring test at higher Vdd settings, and converts logic faults that are undetectable at the lowest Vdd setting into detectable logic faults by using test points, which provide additional controllability and observability at the fault site. TPI has shown encouraging results in terms of reducing the number of test Vdd settings; however, it does not achieve single-Vdd test for all designs. Taking this issue into account, a second, gate-sizing (GS) based DFT technique is proposed. It targets bridges that require multi-Vdd test and increases the drive strength of the gates driving such bridges. The number of test Vdd settings is thereby reduced, minimizing test cost. Experimental results show that, for all designs, the proposed GS technique achieves 100% fault coverage at a single Vdd setting; in addition, it has a lower overhead than TPI in terms of timing, area and power.
The Vdd-dependent detectability of resistive bridges demands re-evaluation of existing diagnosis techniques, as they all use a single voltage setting for fault diagnosis, which may have a negative impact on diagnosis accuracy, affecting the subsequent design cycle and yield. This thesis proposes a novel and cost-effective technique to improve the diagnosis accuracy of resistive bridges in APM-enabled designs. It evaluates the impact of varying supply voltage on the accuracy of diagnosis and demonstrates how additional voltage settings can be leveraged to improve diagnosis accuracy through a novel multi-voltage diagnosis algorithm. The diagnosis cost is reduced by identifying the most useful voltage settings and eliminating tests at other voltages, thereby achieving high diagnosis accuracy at reduced cost. All developed test and diagnosis techniques have been validated using simulations with ISCAS and ITC benchmarks, realistic fault models and actual bridges extracted from physical layouts.
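Why multi-Vdd testing is expensive is easy to see as a covering problem: each bridge defect is detectable only at some subset of supply voltages, and the tester must pick a set of voltages that together cover every defect. The greedy set-cover sketch below illustrates the baseline problem that the TPI and GS techniques are designed to shrink, ideally down to a single Vdd; the function and data are invented for illustration and are not the thesis's DFT method.

```python
def min_test_voltages(detectable_at):
    """Greedy set cover over Vdd settings. `detectable_at` maps each defect
    name to the set of supply voltages at which that defect is detectable.
    Returns a small (not necessarily minimal) list of voltages covering
    every defect; assumes each defect is detectable at some voltage."""
    uncovered = set(detectable_at)
    voltages = sorted(set().union(*detectable_at.values()))
    chosen = []
    while uncovered:
        # Pick the voltage that exposes the most still-uncovered defects.
        best = max(voltages,
                   key=lambda v: sum(1 for d in uncovered if v in detectable_at[d]))
        chosen.append(best)
        uncovered = {d for d in uncovered if best not in detectable_at[d]}
        voltages.remove(best)
    return chosen

# Hypothetical bridges: b2 is only detectable at 1.2 V, b3 only at 0.9 V,
# so no single voltage suffices and the part needs a repetitive two-Vdd test.
chosen = min_test_voltages({"b1": {0.9, 1.2}, "b2": {1.2}, "b3": {0.9}})
```

TPI and GS attack this from the defect side: by making faults detectable at the lowest Vdd, they collapse the covering sets until one voltage covers everything.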

Link integrity for the Semantic Web

Vesse, Robert January 2012 (has links)
The usefulness and usability of data on the Semantic Web is ultimately reliant on the ability of clients to retrieve Resource Description Framework (RDF) data from the Web. When RDF data is unavailable, clients reliant on that data may either fail to function entirely or behave incorrectly. As a result, there is a need to investigate and develop techniques that aim to ensure that some data is still retrievable, even in the event that the primary source of the data is unavailable. Since this problem is essentially the classic link integrity problem from hypermedia and the Web, we look at the range of techniques suggested by past research and attempt to adapt these to the Semantic Web. Having studied past research, we identified two potentially promising strategies for solving the problem: 1) Replication and Preservation; and 2) Recovery. Using techniques developed to implement these strategies for hypermedia and the Web as a starting point, we designed our own implementations, adapted appropriately for the Semantic Web. We describe the design, implementation and evaluation of our adaptations before going on to discuss the implications of the usage of such techniques. In this research we show that such approaches can successfully apply link integrity to a variety of datasets on the Semantic Web, but that further research is needed before such solutions can be widely deployed.
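At its simplest, the replication strategy amounts to falling back to alternative copies when the authoritative source of an RDF document is unavailable. The sketch below illustrates that idea only; the source names, URI and fetch interface are invented for illustration, and the thesis's actual implementations are considerably richer.

```python
def resolve_with_replicas(uri, fetchers):
    """Replication-style link integrity sketch: try the primary source of
    an RDF document first, then each known replica, returning the first
    successfully retrieved copy. `fetchers` maps source names to callables
    that return the document or raise on failure (tried in dict order)."""
    errors = {}
    for name, fetch in fetchers.items():
        try:
            return name, fetch(uri)
        except Exception as exc:  # source unavailable: record and try the next
            errors[name] = exc
    raise LookupError(f"no retrievable copy of {uri}: {errors}")

# Hypothetical scenario: the origin server is down, a mirror still responds.
def primary(uri):
    raise IOError("origin server unavailable")

def mirror(uri):
    return "<rdf:RDF/>"

source, data = resolve_with_replicas("http://example.org/doc.rdf",
                                     {"primary": primary, "mirror": mirror})
```

A recovery strategy differs in that the fallback copy is reconstructed (e.g. from caches or archives) rather than maintained deliberately, but the client-side shape is the same: degrade gracefully instead of failing outright.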

An investigation into solid dielectrics

Kleemann, Tobias A. January 2012 (has links)
Direct measurement techniques for the investigation of electrical processes in solid dielectrics are reviewed and their respective strengths and weaknesses discussed, particularly the complementary nature of thermally stimulated current measurements. The successful design and construction of a new Thermally Stimulated Discharge Current (TSDC) spectrometer at the University of Southampton is presented, and its correct function validated with experimental measurements of the well-known and often-characterized synthetic polymers low-density polyethylene (LDPE) and polyethylene terephthalate (PET). Results were found to correspond well to published data. The first TSDC observations of filled and oil-impregnated papers are presented. The second aspect of this work is the investigation of natural polymer insulation materials, specifically paper for oil-paper insulation systems. For the first time, electrical insulation papers with filler contents up to 50% were investigated. Bentonite and talcum were compared as filler materials and found to have negative and positive effects, respectively. The superior electrical strength of a talcum-filled kraft paper was verified, and a series of constructive modifications was undertaken to further maximise its electrical strength at comparable or improved dielectric performance. An increase in electrical breakdown strength of 20% to 30% was observed, but the substitution of such large amounts of fibre with fillers also led to a reduction in the mechanical strength of the paper. Further trials with chemical additives were conducted to counteract this effect, and polyvinyl alcohol and starch were found to enhance the paper strength. Additional trials also comprised sizing agents, guar gum and wet-strength agents. Uncharged or slightly charged chemical additives provided the best results with regard to dielectric performance. The significance of the trialled paper modifications is judged in light of statistical analysis.

Algorithms for appliance usage prediction

Truong, Ngoc Cuong January 2014 (has links)
Demand-Side Management (DSM) is one of the key elements of future Smart Electricity Grids. DSM involves mechanisms to reduce or shift the consumption of electricity in an attempt to minimise peaks. By so doing, it is possible to avoid using expensive peaking plants that are also highly carbon-emitting. A key challenge in DSM, however, is the need to predict energy usage from specific home appliances accurately, so that consumers can be notified to shift or reduce the use of high energy-consuming appliances. In some cases, such notifications may also need to be given at very short notice. Hence, to solve the appliance usage prediction problem, in this thesis we develop novel algorithms that take into account both users' daily practices (by taking advantage of the cyclic nature of routine activities) and the inter-dependency between the usage of multiple appliances (i.e., the user's typical consumption patterns). We propose two prediction algorithms to satisfy the needs for fast prediction and high accuracy respectively: i) a rule-based approach, EGH-H, for scenarios in which notifications need to be given at short notice, which finds significant patterns in the use of appliances that capture the user's behaviour (or habits); and ii) a graphical-model based approach, GM-PMA (Graphical Model for Prediction in Multiple Appliances), for scenarios that require high prediction accuracy. We demonstrate through extensive empirical evaluations on real-world data from a prominent database of home energy usage that GM-PMA outperforms existing methods by up to 41%, and that the runtime of EGH-H is on average 100 times lower than that of other benchmark algorithms, while maintaining competitive prediction accuracy.
Moreover, we demonstrate the use of appliance usage prediction algorithms in the context of demand-side management by proposing an Intelligent Demand Response (IDR) mechanism, in which an agent uses Logistic Inference to learn the user's preferences and hence provide the best personalised suggestions to the user. We use simulations to evaluate IDR on a number of user types, and show that, by using IDR, users are likely to improve their savings significantly.
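The "cyclic nature of routine activities" that the rule-based approach exploits can be reduced to a toy predictor: if an appliance was usually ON at a given hour on past days, predict it ON at that hour today. This frequency rule is a deliberately simple stand-in for illustration; the function name, data layout and threshold are assumptions, and EGH-H itself is far more sophisticated.

```python
def predict_on(history, hour, threshold=0.5):
    """Predict whether an appliance will be ON at `hour` today, based on
    how often it was ON at that hour on past days. `history` is a list of
    sets, one per past day, each holding the hours the appliance was used."""
    days_on = sum(1 for day_hours in history if hour in day_hours)
    return days_on / len(history) >= threshold

# Hypothetical kettle usage over three days: on at 7am twice, 7-8pm varies.
history = [{7, 19}, {7, 20}, {8, 19}]
morning = predict_on(history, 7)   # 2 of 3 days -> predicted ON
evening = predict_on(history, 20)  # 1 of 3 days -> predicted OFF
```

Such a rule is cheap enough to evaluate at short notice, which is precisely the trade-off the abstract draws between the fast rule-based predictor and the more accurate graphical model.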
