1371

Virtual living organism : a rapid prototyping tool to emulate biology

Bándi, Gergely January 2011
Rapid prototyping tools exist in many fields of science and engineering, but they are rare in biology, especially general tools that can handle the diversity and complexity of the many spatial and temporal scales in nature. This thesis presents a general-purpose, cell-based, middle-out biology emulation programming framework (outlining a programming paradigm) that enables biologists to emulate and use virtual biological systems of previously unimaginable complexity, and potentially to obtain results accurate enough to be used in research and, ultimately, in clinical practice, such as diagnosis or operations. With this technology, virtual organisms can be created that are viable and fit, and that can be optimised for any task that arises. The tool, realised as a programming framework for the C++ language, is detailed and demonstrated through several examples of increasing complexity, namely several example organisms and a cancer emulation, showing both viable virtual organisms and usable experimental results.
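A minimal sketch of what a cell-based emulation loop might look like; the `Cell` class, its division/death rules, and the thresholds below are illustrative assumptions, not the framework's actual C++ API:

```python
import random

class Cell:
    """Toy cell agent: divides when it has enough energy, dies when starved.
    The rules and constants here are assumptions for illustration only."""
    def __init__(self, energy=1.0):
        self.energy = energy

    def step(self, nutrient):
        self.energy += nutrient - 0.1          # uptake minus maintenance cost
        if self.energy <= 0:
            return []                           # cell death
        if self.energy >= 2.0:
            half = self.energy / 2
            self.energy = half
            return [self, Cell(half)]           # cell division
        return [self]

def emulate(steps=100, nutrient=0.15):
    population = [Cell()]
    for _ in range(steps):
        next_pop = []
        for cell in population:
            next_pop.extend(cell.step(nutrient * random.uniform(0.5, 1.5)))
        population = next_pop
    return len(population)

print(emulate())  # population size after the emulated period
```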
1372

Performance analysis for network coding using ant colony routing

Sabri, Dalia January 2011
The aim of this thesis is to investigate the performance of a system that combines the Network Coding (NC) technique with an Ant Colony Optimization (ACO) routing protocol. The research analyses the impact of several workload characteristics on system performance. Network coding is a significant development in information transmission and processing: it enhances the performance of multicast by employing encoding operations at intermediate nodes, which combine several packets and relay them as a single packet. Two steps must be realised when using network coding in multicast communication: determining appropriate transmission paths from the source to the multiple receivers, and selecting a suitable coding scheme. Although network coding can make a network achieve the maximum multicast rate, it always brings additional overhead, which should be minimised using an optimisation technique. Ant Colony Optimization, on the other hand, imitates ants' behaviour in finding the shortest path to a destination by following the pheromone left by former ants as guidance; applying the same concept to the communication network environment, shorter paths can be found. The simulation results show that combining Ant Colony Optimization with network coding considerably improves the performance of the network: a 25% improvement in bandwidth consumption can be achieved in comparison with conventional routing protocols, and the proposed algorithm can decrease the computation time of the system by 20%.
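A minimal sketch of the ant-colony routing idea described above, assuming a simple per-node pheromone table; the evaporation and deposit constants and the probabilistic next-hop rule are standard ACO ingredients, not the thesis's exact protocol:

```python
import random

# pheromone[(u, v)] guides ants at node u toward neighbour v
pheromone = {}

def next_hop(node, neighbours, alpha=1.0):
    """Choose the next hop with probability proportional to pheromone^alpha."""
    weights = [pheromone.get((node, n), 1.0) ** alpha for n in neighbours]
    return random.choices(neighbours, weights=weights)[0]

def reinforce(path, evaporation=0.1, deposit=1.0):
    """Evaporate all trails slightly, then deposit pheromone along a found
    path; shorter paths receive a larger deposit per hop."""
    for edge in pheromone:
        pheromone[edge] *= (1.0 - evaporation)
    for u, v in zip(path, path[1:]):
        pheromone[(u, v)] = pheromone.get((u, v), 1.0) + deposit / len(path)

# Example: an ant walks src -> a -> dst and reinforces that trail,
# biasing future ants toward neighbour "a".
reinforce(["src", "a", "dst"])
print(next_hop("src", ["a", "b"]))
```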
1373

Flexible cross layer design for improved quality of service in MANETs

Kiourktsidis, Ilias January 2011
Mobile Ad hoc Networks (MANETs) are becoming increasingly important because of their unique connectivity characteristics. Several delay-sensitive applications are starting to appear in these kinds of networks, so a key concern is guaranteeing Quality of Service (QoS) in such a constantly changing communication environment. The classical QoS-aware solutions used to date in wired and infrastructure wireless networks cannot achieve the necessary performance in MANETs; the specialised protocols designed for multihop ad hoc networks offer basic connectivity with limited delay awareness, and the mobility factor in MANETs makes them even less suitable. Protocols and solutions have been emerging in almost every layer of the protocol stack. The majority of research efforts agree that, in such a dynamic environment, optimising protocol performance requires additional information about the status of the network to be available. Hence, many cross-layer design approaches have appeared. Cross-layer design has major advantages and the necessity to utilise such a design is clear; however, it conceals risks such as architectural instability and design inflexibility, and its aggressive use excessively increases the cost of deployment and complicates both maintenance and upgrade of the network. Autonomous protocols, such as bio-inspired mechanisms and algorithms that are resilient to cross-layer information unavailability, can reduce the dependence on cross-layer design; properties such as predicting dynamic conditions and adapting to them are also quite important. This thesis proposes a routing decision algorithm based on Bayesian inference for predicting path quality, and presents its accurate prediction capabilities and its efficient use of the plethora of cross-layer information. Furthermore, an adaptive mechanism based on a Genetic Algorithm (GA) is used to control the flow of data in the transport layer. This flow control mechanism inherits the GA's optimisation capabilities without needing to know any details about the network conditions, thus reducing the dependence on cross-layer information. Finally, it is illustrated how Bayesian inference can be used to suggest configuration parameter values to other protocols in different layers in order to improve their performance.
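One standard way to realise Bayesian path-quality prediction of the kind sketched in the abstract is a Beta-Bernoulli update over observed packet deliveries; the uniform prior and the success/failure observations below are illustrative assumptions, not the thesis's actual model:

```python
class PathQualityEstimator:
    """Beta-Bernoulli estimate of a path's delivery probability.
    Starts from a uniform Beta(1, 1) prior and updates on each outcome."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta

    def observe(self, delivered: bool):
        if delivered:
            self.alpha += 1
        else:
            self.beta += 1

    def predicted_quality(self) -> float:
        return self.alpha / (self.alpha + self.beta)  # posterior mean

est = PathQualityEstimator()
for outcome in [True, True, False, True]:   # hypothetical delivery reports
    est.observe(outcome)
print(f"predicted delivery probability: {est.predicted_quality():.2f}")
```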
1374

Improving fairness and utilisation in ad hoc networks

Arabi, Mohamed January 2012
Ad hoc networks represent the current de facto alternative for infrastructure-less environments, thanks to their self-configuring and resilient characteristics. Their flexibility benefits, such as unrestrained computing, lack of centralisation, and ease of deployment at low cost, are tightly bound to relevant deficiencies such as limited resources and management difficulty. Ad hoc networks have received considerable attention from the research community due to the numerous challenges faced when deploying such a technology in real scenarios. Starting with the nature of the wireless environment, which raises significant transmission issues compared with its wired counterpart, ad hoc networks require a different approach to data link problems. Further, the high packet loss due to wireless contention, independent of network congestion, requires a different approach to quality of service degradation and the unfair distribution of channel resources among competing flows. Although these issues have already been considered to some extent by researchers, there is still room to improve quality of service by reducing the effect of packet loss and fairly distributing medium access among competing nodes. The aim of this thesis is to propose a set of mechanisms to alleviate the effect of packet loss and to improve fairness in ad hoc networks. A transport layer algorithm is proposed to overcome the effects of hidden node collisions and to reduce the impact of wireless link contention by estimating the four-hop delay and pacing packet transmissions accordingly. Furthermore, certain topologies are identified in which the standard IEEE 802.11 suffers degraded channel utilisation and unfair bandwidth allocation. Three link layer mechanisms are proposed to tackle the challenges IEEE 802.11 faces in the identified scenarios and to impose fairness by distributing channel resources between competing nodes. These mechanisms are based on monitoring the collision rate and penalising greedy nodes where no competing nodes can be detected but interference exists; monitoring traffic at source nodes to police access to the channel where only source nodes are within transmission range of each other; and using MAC layer acknowledgements to flag unfair bandwidth allocation in topologies where only the receivers are within transmission range of each other. The proposed mechanisms are integrated into a framework designed to adapt and dynamically select which mechanism to adopt depending on the network topology. It is important to note that the proposed mechanisms and framework are not alternatives to the standard MAC protocol but an enhancement, triggered by the failure of the IEEE 802.11 protocol to distribute channel resources fairly. All the proposed mechanisms have been validated through simulations, and the results show that the proposed schemes distribute channel resources fairly and outperform the IEEE 802.11 protocol in terms of both channel utilisation and fairness.
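A minimal sketch of the transport-layer pacing idea: estimate the four-hop delay with an exponentially weighted moving average and space transmissions by that estimate. The smoothing factor and the delay-probe interface are assumptions for illustration, not the thesis's algorithm:

```python
import time

class FourHopPacer:
    """Paces packet transmissions by a smoothed four-hop delay estimate.
    `sample_four_hop_delay` is a hypothetical probe supplied by the caller."""
    def __init__(self, sample_four_hop_delay, smoothing=0.125):
        self.sample = sample_four_hop_delay
        self.smoothing = smoothing
        self.estimate = None

    def update_estimate(self):
        sample = self.sample()
        if self.estimate is None:
            self.estimate = sample
        else:  # EWMA, in the style of TCP's RTT estimator
            self.estimate += self.smoothing * (sample - self.estimate)
        return self.estimate

    def send_paced(self, packets, send):
        for pkt in packets:
            send(pkt)
            time.sleep(self.update_estimate())  # wait one estimated 4-hop delay

# Pretend the probe always measures 20 ms; sends are spaced accordingly.
pacer = FourHopPacer(lambda: 0.02)
pacer.send_paced([b"p1", b"p2"], send=lambda pkt: None)
```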
1375

Evolution of grasping behaviour in anthropomorphic robotic arms with embodied neural controllers

Massera, Gianluca January 2012
The work reported in this thesis focuses on synthesising, through an automatic design process based on artificial evolution, neural controllers for anthropomorphic robots that are able to manipulate objects. The use of Evolutionary Robotics makes it possible to reduce the characteristics and parameters specified by the designer to a minimum, with the robot's skills evolving as it interacts with the environment. The primary objective of these experiments is to investigate whether neural controllers that regulate the state of the motors on the basis of current and previously experienced sensor readings (i.e. without relying on an inverse model) can enable the robots to solve such complex tasks. Another objective is to investigate whether the Evolutionary Robotics approach can be successfully applied to scenarios significantly more complex than those to which it is typically applied, in terms of the complexity of the robot's morphology, the size of the neural controller, and the complexity of the task. The obtained results indicate that skills such as reaching, grasping, and discriminating among objects can be accomplished without the need to learn precise inverse internal models of the arm/hand structure. This supports the hypothesis that the human central nervous system (CNS) does not necessarily rely on internal models of the limbs (not excluding the fact that it might possess such models for other purposes), but can act by shifting the equilibrium points/cycles of the underlying musculoskeletal system. Consequently, the resulting controllers for such fundamental skills are less complex, and the learning of more complex behaviours becomes easier to design because the underlying controller of the arm/hand structure is simpler. Moreover, the obtained results show how the evolved robots exploit sensory-motor coordination to accomplish their tasks.
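A minimal sketch of the evolutionary loop such work typically relies on: a population of controller weight vectors is scored by a fitness function and the best are mutated into the next generation. The stand-in fitness, population size, and mutation scheme below are illustrative assumptions, not the thesis's setup:

```python
import random

def evaluate(weights):
    """Stand-in fitness: in the thesis this would be the robot's task
    performance in simulation; here we just reward weights near 0.5."""
    return -sum((w - 0.5) ** 2 for w in weights)

def evolve(n_weights=10, pop_size=20, generations=50, sigma=0.1):
    """Minimal truncation-selection evolution of controller weight vectors."""
    population = [[random.uniform(-1, 1) for _ in range(n_weights)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        parents = scored[:pop_size // 4]        # keep the fittest quarter
        population = [[w + random.gauss(0, sigma)          # Gaussian mutation
                       for w in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=evaluate)

best = evolve()
print(f"best fitness: {evaluate(best):.4f}")
```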
1376

Markov chains for sampling matchings

Matthews, James January 2008
Markov Chain Monte Carlo algorithms are often used to sample combinatorial structures such as matchings and independent sets in graphs. A Markov chain is defined whose state space includes the desired sample space and which has an appropriate stationary distribution. By simulating the chain for a sufficiently large number of steps, we can sample from a distribution arbitrarily close to the stationary distribution; the number of steps required to do this is known as the mixing time of the Markov chain. In this thesis, we consider a number of Markov chains for sampling matchings, both in general graphs and in more restricted classes, and also for sampling independent sets in claw-free graphs. We apply techniques for showing rapid mixing based on two main approaches: coupling and conductance. We consider chains using single-site moves, and also chains using large block moves. Perfect matchings of bipartite graphs are of particular interest in our community. We investigate the mixing time of a Markov chain for sampling perfect matchings in a restricted class of bipartite graphs, and show that its mixing time is exponential in some instances; for a further restricted class of graphs, however, we can show subexponential mixing time. One of the techniques for showing rapid mixing is coupling, where the bound on the mixing time depends on a contraction ratio b. Ideally b < 1, but in the case b = 1 it is still possible to obtain a bound on the mixing time, provided there is a sufficiently large probability of contraction for all pairs of states. We develop a lemma which, in the case where b = 1 and the probability of a change in distance is proportional to the distance between the two states, obtains better bounds on the mixing time than existing theorems. We apply this lemma to the Dyer-Greenhill chain for sampling independent sets, and to a Markov chain for sampling 2D-colourings.
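A single-site chain for matchings is typically of the following form: pick a uniformly random edge and toggle it whenever the result is still a matching. A minimal sketch (one common variant; lazy steps and other move types used in the literature are omitted):

```python
import random

def matching_chain_step(matching, edges):
    """One single-site move: pick a random edge; remove it if it is in the
    matching, add it if both endpoints are currently unmatched, else stay."""
    u, v = random.choice(edges)
    if (u, v) in matching:
        matching.remove((u, v))
    elif all(u not in e and v not in e for e in matching):
        matching.add((u, v))
    return matching

# Sample (approximately) from the uniform distribution on matchings of the
# 4-cycle by running the chain for many steps.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
state = set()
for _ in range(10_000):
    state = matching_chain_step(state, edges)
print(state)
```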
1377

Improved Approximation Algorithms for Geometric Packing Problems With Experimental Evaluation

Song, Yongqiang 12 1900
Geometric packing problems are NP-complete problems that arise in VLSI design. In this thesis, we present two novel algorithms that use dynamic programming to compute exactly the maximum number of k x k squares of unit size that can be packed without overlap into a given n x m grid. The first algorithm was implemented and ran successfully on problems with large inputs, up to 1,000,000 nodes, for different values of k. A heuristic based on the second algorithm was also implemented; it is fast in practice, though it may not always produce optimal solutions in theory. However, over a wide range of random data this version of the algorithm gives very good solutions very fast, and runs on problems of up to 100,000,000 nodes in a grid with different ranges for the variables. This version is also clearly superior to the first algorithm and has proven very efficient in practice.
1378

Multiple Change-Point Detection: A Selective Overview

Niu, Yue S., Hao, Ning, Zhang, Heping 11 1900
Very long and noisy sequence data arise in fields ranging from the biological sciences to the social sciences, including high-throughput data in genomics and stock prices in econometrics. Often such data are collected in order to identify and understand shifts in trends, for example from a bull market to a bear market in finance, or from a normal number of chromosome copies to an excessive number in genetics. Thus, identifying multiple change points in a long, possibly very long, sequence is an important problem. In this article, we review both classical and new multiple change-point detection strategies. Considering the long history and the extensive literature on change-point detection, we provide an in-depth discussion of the normal mean change-point model from the perspectives of regression analysis, hypothesis testing, consistency, and inference. In particular, we present a strategy that gathers and aggregates local information for change-point detection, which has become the cornerstone of several emerging methods because of its attractive computational and theoretical properties.
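A minimal sketch of the local-information idea for the normal mean change-point model: a moving-window statistic compares the mean of the h points to the left of each position with the mean of the h points to its right, and local maxima of the statistic that exceed a threshold become candidate change points. The window size and threshold below are illustrative assumptions, not the article's recommended tuning:

```python
import numpy as np

def local_diff_statistic(y, h):
    """|mean of right window - mean of left window| at each interior point."""
    n = len(y)
    stat = np.zeros(n)
    for t in range(h, n - h):
        stat[t] = abs(y[t:t + h].mean() - y[t - h:t].mean())
    return stat

def candidate_change_points(y, h=20, threshold=0.5):
    stat = local_diff_statistic(np.asarray(y, dtype=float), h)
    # keep local maxima of the statistic that exceed the threshold
    return [t for t in range(h, len(y) - h)
            if stat[t] > threshold
            and stat[t] == stat[max(0, t - h):t + h].max()]

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(2, 1, 100)])
print(candidate_change_points(y))  # should flag a point near t = 100
```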
1379

Calibration and validation of high frequency radar for ocean surface current mapping

Kim, Kyung Cheol 06 1900
Approved for public release, distribution is unlimited / High Frequency (HF) radar backscatter instruments are being developed and tested in the marine science and defense science communities for their ability to remotely sense surface parameters in the coastal ocean over large areas. In the Navy context, these systems provide real-time mapping of the ocean surface currents and waves critical for characterizing and forecasting the battle-space environment. In this study, the performance of a network of four CODAR (Coastal Ocean Dynamics Application Radar) SeaSonde HF radars, using the Multiple Signal Classification (MUSIC) algorithm for direction finding, is described for the period from July to September 2003. Comparisons are made in Monterey Bay with moored velocity observations, with four radar baseline pairs, and with velocity observations from sixteen drifter deployments. All systems measure ocean surface currents, and all vector currents are translated into radial current components in the direction of the various radar sites. Measurement depths are 1 m for the HF radar-derived currents, 12 to 20 m for the ADCP bin nearest the surface at the M1 mooring site, and 8 m for the drifter-derived velocity estimates. Comparisons of HF radar against the M1 mooring buoy, HF radar against HF radar (baseline), and HF radar against drifter data yield improvements of -1.7 to 16.7 cm/s in rms differences and -0.03 to 0.35 in correlation coefficients when measured antenna patterns are used. The mooring comparisons and the radar-to-radar baseline comparisons indicate angular shifts of 10° to 30° for radial currents produced using ideal antenna patterns, and 0° to 15° for radial currents produced using measured patterns. The comparisons with drifter-derived radial currents indicate that these angular biases are not constant across all look directions, even though local antenna pattern distortions were taken into account through the use of measured antenna patterns. In particular, data from the SCRZ and MLNG radar sites show varied pointing errors across the range of angles covered. / Lieutenant Commander, Republic of Korea Navy
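The rms differences and correlation coefficients quoted above are straightforward to compute from paired radial-current series; a minimal sketch with made-up sample data (not the study's measurements):

```python
import numpy as np

def compare_radials(radar, mooring):
    """RMS difference (same units as input) and Pearson correlation
    between two paired radial-current time series."""
    radar, mooring = np.asarray(radar), np.asarray(mooring)
    rms = np.sqrt(np.mean((radar - mooring) ** 2))
    corr = np.corrcoef(radar, mooring)[0, 1]
    return rms, corr

# Hypothetical hourly radial currents in cm/s.
rng = np.random.default_rng(1)
truth = rng.normal(0, 15, 200)
radar = truth + rng.normal(0, 5, 200)   # radar adds measurement noise
rms, corr = compare_radials(radar, truth)
print(f"rms difference: {rms:.1f} cm/s, correlation: {corr:.2f}")
```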
1380

Influence analysis towards big social data

Han, Meng 03 May 2017
Large-scale social data from online social networks, instant messaging applications, and wearable devices have recently seen exponential growth in the number of users and activities. The rapid proliferation of social data provides rich information and infinite possibilities for us to understand and analyze the complex inherent mechanisms that govern the evolution of the new technology age. Influence, as a natural product of information diffusion (or propagation), represents the change in an individual's thoughts, attitudes, and behaviors resulting from interaction with others, and is one of the fundamental processes in social worlds. Influence analysis therefore occupies a very prominent place in social data analysis, theory, models, and algorithms. In this dissertation, we study influence analysis in the setting of big social data. Firstly, we investigate the uncertainty of influence relationships in a social network. A novel sampling scheme is proposed which enables the development of an efficient algorithm to measure uncertainty. Considering the practicality of neighborhood relationships in real social data, a framework is introduced to transform uncertain networks into deterministic weighted networks, where the weight on an edge can be measured as a Jaccard-like index. Secondly, focusing on the dynamics of social data, a practical framework is proposed that probes only a subset of communities to track the real changes in a social network; the probing framework minimizes the possible difference between the observed topology and the actual network through several representative communities, and an accompanying algorithm takes full advantage of a divide-and-conquer strategy to reduce the computational overhead. Thirdly, taking the number of users who are influenced as the depth of propagation and the area covered by influenced users as the breadth, most existing results focus only on influence depth rather than influence breadth. Timeliness, acceptance ratio, and breadth are three important factors that significantly affect the outcome of influence maximization in reality, yet they are usually neglected. To fill this gap, a novel algorithm is investigated that incorporates time delay for timeliness, opportunistic selection for acceptance ratio, and broad diffusion for influence breadth. In our model, the breadth of influence is measured by the number of covered communities, and the tradeoff between depth and breadth of influence can be balanced by a specific parameter. Furthermore, we address the problem of privacy-preserving influence maximization in both physical location networks and online social networks: sensed location information collected from the cyber-physical world and relationship information gathered from online social networks are merged into a unified framework with a comprehensive model, an efficient algorithm is proposed for the resulting influence maximization problem, and a privacy-preserving mechanism is proposed to protect cyber-physical location and link information at the application level. Last but not least, to address the challenge of large-scale data, we design an efficient influence maximization framework based on two new models that incorporate the dynamism of networks and consider time constraints during the influence spreading process in practice.

All proposed problems and models of influence analysis have been empirically studied and validated on diverse, large-scale, real-world social data in this dissertation.
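A minimal sketch of the neighborhood-based transformation described in the first contribution: weight each edge by a Jaccard-like index of its endpoints' neighborhoods. The adjacency representation is an assumption for illustration; the dissertation's exact index may differ:

```python
def jaccard_weight(adj, u, v):
    """Jaccard-like weight for edge (u, v): shared neighbors over all
    neighbors of the pair (the endpoints themselves excluded)."""
    nu, nv = adj[u] - {v}, adj[v] - {u}
    union = nu | nv
    return len(nu & nv) / len(union) if union else 0.0

# Toy uncertain network collapsed to a deterministic weighted one.
adj = {
    "a": {"b", "c", "d"},
    "b": {"a", "c"},
    "c": {"a", "b", "d"},
    "d": {"a", "c"},
}
weights = {(u, v): jaccard_weight(adj, u, v)
           for u in adj for v in adj[u] if u < v}
print(weights)   # e.g. ("a", "b") -> 0.5: they share c out of {c, d}
```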
