501

Investigation of IEEE Standard 802.11 Medium Access Control (MAC) Layer in ad-hoc

Garcia Torre, Fernando January 2006 (has links)
This thesis investigates the mechanisms of the MAC layer in the ad-hoc network environment. In the terminology of the standard, ad-hoc networks are called Independent Basic Service Sets (IBSS). These networks are very useful in real situations where deploying an infrastructure is not possible and no prior network planning has been done. The way a station joins a network is one of the main differences from the most common type of Wireless Local Area Network (WLAN), the infrastructure network: the connection is established without a central station, with each station discovering the others through broadcast messages within its coverage area. In 802.11 networks, communication between stations is peer to peer, over a single hop. To complete the initiation process, the stations must synchronize their timers. The other central mechanism treated is medium access, which must cope with a shared and unreliable medium; the whole weight of this task falls on the distributed coordination function (DCF). WiMAX (IEEE 802.16) is an emerging technology and, like 802.11, a wireless communication protocol; a comparison of the MAC-layer mechanisms of these two standards is also carried out.
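As a rough illustration of the DCF contention mechanism the abstract mentions (a sketch, not code from the thesis), the binary exponential backoff at its core can be written as:

```python
import random

def dcf_backoff_slots(retry, cw_min=15, cw_max=1023, rng=random.Random(0)):
    """Draw a DCF backoff count uniformly from [0, CW], where the
    contention window roughly doubles on each retry, saturating at
    cw_max (values follow common 802.11 defaults)."""
    cw = min((cw_min + 1) * (2 ** retry) - 1, cw_max)
    return rng.randint(0, cw)

# The contention window grows 15, 31, 63, ... and saturates at 1023.
windows = [min((15 + 1) * (2 ** r) - 1, 1023) for r in range(8)]
```

Each station counts down its drawn slots while the medium is idle and transmits when the counter reaches zero, which is how DCF shares the channel without a central coordinator.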
502

The Great Synchronization of International Trade Collapse

Antonakakis, Nikolaos January 2012 (has links) (PDF)
In this paper we examine the extent of international trade synchronization during periods of international trade collapses and US recessions. Using dynamic correlations based on monthly trade data for the G7 economies over the period 1961-2011, our results suggest rather idiosyncratic patterns of international trade synchronization during collapses of international trade and US recessions. During the great recession of 2007-2009, however, international trade experienced the most sudden, severe and globally synchronized collapse. (author's abstract)
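A minimal sketch of the kind of dynamic (rolling-window) correlation the paper computes, shown on illustrative series rather than the actual G7 monthly trade data:

```python
import math

def rolling_correlation(x, y, window):
    """Rolling Pearson correlation over a sliding window: one simple way
    to trace how synchronized two trade-growth series are over time."""
    out = []
    for i in range(window - 1, len(x)):
        xs, ys = x[i - window + 1:i + 1], y[i - window + 1:i + 1]
        mx, my = sum(xs) / window, sum(ys) / window
        cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        vx = sum((a - mx) ** 2 for a in xs)
        vy = sum((b - my) ** 2 for b in ys)
        out.append(cov / math.sqrt(vx * vy))
    return out
```

A globally synchronized collapse would show up as the rolling correlations of all country pairs jumping toward one simultaneously.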
503

Automated Epileptic Seizure Onset Detection

Dorai, Arvind 21 April 2009 (has links)
Epilepsy is a serious neurological disorder characterized by recurrent unprovoked seizures due to abnormal or excessive neuronal activity in the brain. An estimated 50 million people around the world suffer from this condition, and it is classified as the second most serious neurological disease known to humanity, after stroke. With early and accurate detection of seizures, doctors can gain valuable time to administer medications and other anti-seizure countermeasures to help reduce the damaging effects of this crippling disorder. The time-varying dynamics and high inter-individual variability make early prediction of a seizure state a challenging task. Many studies have shown that EEG signals do carry valuable information that, if correctly analyzed, could help in the prediction of seizures in epileptic patients before their occurrence. Several mathematical transforms were analyzed for their correlation with seizure onset prediction, and a series of experiments was conducted to certify their strengths. New algorithms are presented to help clarify, monitor, and cross-validate the classification of EEG signals to predict the ictal (i.e. seizure) states, specifically the preictal, interictal, and postictal states in the brain. These new methods show promising results in detecting the presence of a preictal phase prior to the ictal state.
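The thesis's actual algorithms are not reproduced here; as an illustrative stand-in, a classic line-length feature with a threshold shows the flavour of window-based onset detection (window size and threshold are hypothetical):

```python
def line_length(window):
    """Line-length feature: sum of absolute successive differences, a
    common cheap marker of high-amplitude, high-frequency EEG activity."""
    return sum(abs(b - a) for a, b in zip(window, window[1:]))

def detect_onset(signal, window=4, threshold=5.0):
    """Return the index of the first window whose line length exceeds
    the threshold, or None if no candidate onset is found."""
    for i in range(len(signal) - window + 1):
        if line_length(signal[i:i + window]) > threshold:
            return i
    return None
```

Real detectors combine many such features and per-patient tuning, but the sliding-window structure is the same.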
504

Model Synchronization for Software Evolution

Ivkovic, Igor 26 August 2011 (has links)
Software evolution refers to continuous change that a software system endures from inception to retirement. Each change must be efficiently and tractably propagated across models representing the system at different levels of abstraction. Model synchronization activities needed to support the systematic specification and analysis of evolution activities are still not adequately identified and formally defined. In our research, we first introduce a formal notation for the representation of domain models and model instances to form the theoretical basis for the proposed model synchronization framework. Besides conforming to a generic MOF metamodel, we consider that each software model also relates to an application domain context (e.g., operating systems, web services). Therefore, we are addressing the problems of model synchronization by focusing on domain-specific contexts. Secondly, we identify and formally define model dependencies that are needed to trace and propagate changes across system models at different levels of abstraction, such as from design to source code. The approach for extraction of these dependencies is based on Formal Concept Analysis (FCA) algorithms. We further model identified dependencies using Unified Modeling Language (UML) profiles and constraints, and utilize the extracted dependency relations in the context of coarse-grained model synchronization. Thirdly, we introduce modeling semantics that allow for more complex profile-based dependencies using Triple Graph Grammar (TGG) rules with corresponding Object Constraint Language (OCL) constraints. The TGG semantics provide for fine-grained model synchronization, and enable compliance with the Query/View/Transformation (QVT) standards. The introduced framework is assessed on a large, industrial case study of the IBM Commerce system. The dependency extraction framework is applied to repositories of business process models and related source code. 
The extracted dependencies were evaluated by IBM developers, and the corresponding precision and recall values were calculated, with results that match the scope and goals of the research. The grammar-based model synchronization and dependency modelling using profiles were also applied to the IBM Commerce system and evaluated by the developers and architects involved in the development of the system. The results of this experiment were found to be valuable by stakeholders, and a patent codifying them was filed by IBM and has been granted. Finally, the results of this experiment were formalized as TGG rules and used in the context of fine-grained model synchronization.
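As a toy sketch only (the names are hypothetical and this is not the thesis's FCA algorithm), dependency extraction in the spirit of shared-attribute analysis can be approximated by linking artifacts that reference common elements:

```python
def extract_dependencies(usage):
    """Link two artifacts whenever their attribute sets intersect, e.g.
    a business-process element and the code entities that mention it."""
    pairs = set()
    items = list(usage.items())
    for i, (a, attrs_a) in enumerate(items):
        for b, attrs_b in items[i + 1:]:
            if attrs_a & attrs_b:
                pairs.add(frozenset((a, b)))
    return pairs
```

FCA proper builds a full concept lattice from such an object-attribute table; this pairwise view captures only the coarsest dependency relation.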
505

Incremental Model Synchronization

Razavi Nematollahi, Ali January 2012 (has links)
Changing artifacts is intrinsic to the development and maintenance of software projects. The changes made to one artifact, however, do not come about in isolation. Software models are often vastly entangled; as such, a minuscule modification in one ripples inconsistency through several others. The primary goal of this thesis is to investigate techniques and processes for the synchronization of artifacts in model-driven development environments in which projects comprise manifold interdependent models, each a live document that is continuously altered and evolved. The co-evolution of these artifacts demands an efficient mechanism to keep them consistent in such dynamic environments. To achieve this consistency, we explore methods and algorithms for impact analysis and the propagation of modifications across heterogeneous interdependent models. In particular, we consider large-scale models that are generated from other models by complex artifact generators. After creation, both the generated artifacts and the ones they are generated from are subject to evolutionary changes, throughout which their mutual consistency should be maintained. In such situations, the model transformation is the primary benchmark of the consistency rules between source and target models, but the rules are often implanted inside the implementation of the artifact generators and hence unavailable. Trivially, the artifacts can be synchronized by regeneration; more often than not, however, regenerating such artifacts from scratch tends to be unwieldy due to their massive size. This thesis is a summary of research on effective change-management methodologies in the context of model-driven development. In particular, it presents two methods of incrementally synchronizing software models related by existing model transformations, so that the synchronization time is proportional to the magnitude of the change and not to the size of the models.
The first approach treats model transformations as black boxes and adds incremental synchronization to them through a technique called conceptualization. The black-box approach is distinguished from other undertakings in that it does not require the extraction, re-engineering, and re-implementation of the consistency rules embedded inside transformations. The second approach is a white-box approach that uses static analysis to automatically transform the source code of a transformation into an incremental one; in particular, it uses partial evaluation to derive a specialized, incremental transformation from the existing one. These two approaches are complementary and together support a comprehensive range of model transformations.
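The goal both approaches share, sync cost proportional to the change rather than the model, can be sketched in miniature (an element-wise transformation; the thesis handles far richer rule structures):

```python
def incremental_sync(source, target, transform, changed_keys):
    """Re-run the transformation only for changed source elements and
    propagate deletions, instead of regenerating the whole target."""
    for k in changed_keys:
        if k in source:
            target[k] = transform(source[k])
        else:
            target.pop(k, None)   # element deleted in source
    return target
```

With a full regeneration, every element would pass through `transform` again; here only the changed keys do.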
506

Packet CDMA communication without preamble

Rahaman, Md. Sajjad 02 January 2007 (has links)
Code-Division Multiple-Access (CDMA) is one of the leading digital wireless communication methods currently employed throughout the world. Third-generation (3G) and future wireless CDMA systems are required to provide services to a large number of users, where each user sends data bursts only occasionally. The preferred approach is packet-based CDMA, so that many users share the same physical channel simultaneously. In CDMA, each user is assigned a pseudo-random (PN) code sequence. PN codephase synchronization between the received signals and a locally generated replica at the receiver is one of the fundamental requirements for the successful implementation of any CDMA technique. The customary approach is to start each CDMA packet with a synchronization preamble, which consists of the PN code without data modulation. Packets with preambles impose overheads on CDMA communications, especially for short packets such as mouse-clicks or ATM packets of a few hundred bits. It is therefore desirable to perform PN codephase synchronization using the information-bearing signal, without a preamble. This work uses a segmented matched filter (SMF), which is capable of acquiring the PN codephase in the presence of data modulation; hence the preamble can be eliminated, reducing the system overhead. Filter segmentation is also shown to increase tolerance to Doppler shift and local carrier frequency offset.

Computer simulations in MATLAB® were carried out to determine various performance measures of the acquisition system. Substantial improvement in the probability of correct codephase detection in the presence of multiple-access interference and data modulation is obtained by accumulating matched filter samples over several code cycles prior to making the codephase decision.
Correct detection probabilities exceeding 99% are indicated by simulations with 25 co-users and 10 kHz of carrier frequency offset or Doppler shift when five or more PN code cycles are accumulated, using the maximum-selection detection criterion. Analysis and simulation also show that cyclic accumulation can improve packet throughput by 50%, and by as much as 100% under conditions of high offered traffic and Doppler shift, for both fixed-capacity and infinite-capacity systems.
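A brute-force (non-segmented) codephase search conveys the acquisition problem the SMF solves more cheaply; the sequences below are illustrative bipolar (±1) chips, not the thesis's codes:

```python
def codephase_search(received, pn):
    """Correlate the received chips against every cyclic shift of the
    PN code and return the shift with the largest correlation."""
    n = len(pn)
    best_phase, best_corr = 0, float("-inf")
    for phase in range(n):
        corr = sum(received[i] * pn[(i + phase) % n] for i in range(n))
        if corr > best_corr:
            best_phase, best_corr = phase, corr
    return best_phase
```

Data modulation flips the sign of whole code periods, which is what defeats this naive correlator and motivates the segmented matched filter.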
507

Timing Recovery Based on Per-Survivor Processing

Kovintavewat, Piya 13 October 2004 (has links)
Timing recovery is the process of synchronizing the sampler with the received analog signal. Sampling at the wrong times can have a devastating impact on performance. Conventional timing recovery techniques are based on a decision-directed phase-locked loop (PLL). They are adequate only when the operating signal-to-noise ratio (SNR) is sufficiently high, but recent advances in error-control coding have made it possible to communicate reliably at very low SNR, where conventional techniques fail. This thesis develops new techniques for timing recovery that are capable of working at low SNR. We propose a new timing recovery scheme based on per-survivor processing (PSP), which jointly performs timing recovery and equalization by embedding a separate PLL into each survivor of a Viterbi algorithm. The proposed scheme is shown to perform better than conventional schemes, especially when the SNR is low and the timing error is large. An important advantage of this technique is its amenability to real-time implementation. We also propose a new iterative timing recovery scheme that exploits the presence of the error-control code; in doing so, it can perform even better than the PSP scheme described above, but at the expense of increased complexity and the requirement of batch processing. This scheme is realized by embedding the timing recovery process into a trellis-based soft-output equalizer using PSP. This module then iteratively exchanges soft information with the error-control decoder, as in conventional turbo equalization. The resulting system jointly performs the functions of timing recovery, equalization, and decoding. The proposed iterative timing recovery scheme is shown to perform better than previously reported iterative timing recovery schemes, especially when the timing error is severe. Finally, performance analysis of iterative timing recovery schemes is difficult because of their high complexity.
We propose to use the extrinsic information transfer (EXIT) chart as a tool to predict and compare their performances, considering that the bit-error rate computation takes a significant amount of simulation time. Experimental results indicate that the system performance predicted by the EXIT chart coincides with that obtained by simulating data transmission over a complete iterative receiver, especially when the coded block length is large.
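The PLL embedded in each survivor can be caricatured as a first-order loop; this sketch assumes an idealized, noise-free timing-error detector and only shows the geometric convergence toward a fixed offset:

```python
def track_offset(true_offset, steps=50, gain=0.2):
    """First-order PLL caricature: the sampling-phase estimate moves a
    fraction `gain` of the observed timing error on each symbol."""
    tau = 0.0
    for _ in range(steps):
        error = true_offset - tau   # idealized timing-error detector
        tau += gain * error
    return tau
```

In the PSP scheme, each Viterbi survivor runs its own copy of such a loop, driven by the timing errors implied by that survivor's symbol decisions.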
508

A Study of Dynamics of Coupled Nonlinear Circuits

Sanchez, Jose Luis Hernandez 13 January 2005 (has links)
We consider a type of forced Van der Pol oscillator in which the forcing function is periodic and oscillates around the t-axis. The problem is derived from an electrical model. The important point is that this circuit exhibits spiking phenomena over one time period, which has important applications in signal processing and digital communication. The three main problems addressed in this thesis are: computing the number of spikes a solution completes in one time period (which can be used to transform an analog signal into digital information); determining how the number of spikes changes with respect to the amplitude (k) and frequency (w) parameters; and determining when the coupled circuits synchronize (i.e., when the driver and the response are synchronous). Sophisticated mathematical and numerical analysis is developed that enables us to give a complete study of the problems described above.
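The spike count per period, the quantity proposed as the analog-to-digital mapping, amounts to counting upward threshold crossings of a sampled solution trace (trace and threshold below are illustrative):

```python
def count_spikes(trace, threshold=1.0):
    """Count spikes as upward crossings of a threshold in a sampled
    solution trace of the oscillator."""
    return sum(1 for a, b in zip(trace, trace[1:]) if a < threshold <= b)
```

Sweeping the forcing amplitude k or frequency w and recording this count is one way to map out how the spiking dynamics depend on the parameters.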
509

Efficient Conditional Synchronization for Transactional Memory Based System

Naik, Aniket Dilip 10 April 2006 (has links)
Multi-threaded applications are needed to realize the full potential of new chip-multi-threaded machines. Such applications are very difficult to program and orchestrate correctly, and transactional memory has been proposed as a way of alleviating some of the programming difficulties. However, transactional memory can be applied directly only to critical sections, while conditional synchronization remains difficult to implement correctly and efficiently. This dissertation describes EasySync, a simple and inexpensive extension to transactional memory that allows arbitrary conditional synchronization to be expressed in a simple and composable way. Transactional memory eliminates the need to use locks and provides composability for critical sections: atomicity of a transaction is guaranteed regardless of how other code is written. EasySync provides the same benefits for conditional synchronization: it eliminates the need to use condition variables, and it guarantees wakeup of the waiting transaction when the real condition it is waiting for is satisfied, regardless of whether other code correctly signals that change. EasySync also allows transactional memory systems to efficiently provide lock-free and condition-variable-free conditional critical regions and even more advanced synchronization primitives, such as guarded execution with arbitrary conditional or guard code. Because EasySync informs the hardware that a thread is waiting, it allows simple and effective optimizations, such as stopping the execution of a thread until there is a change in the condition it is waiting for. Like transactional memory, EasySync is backward compatible with existing code, which we confirm by running unmodified Splash-2 applications linked with an EasySync-based synchronization library.
We also re-write some of the synchronization in three Splash-2 applications, to take advantage of better code readability, and to replace spin-waiting with its more efficient EasySync equivalents. Our experimental evaluation shows that EasySync successfully eliminates processor activity while waiting, reducing the number of executed instructions by 8.6% on average in a 16-processor CMP. We also show that these savings increase with the number of processors, and also for applications written for transactional memory systems. Finally, EasySync imposes virtually no performance overheads, and can in fact improve performance.
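EasySync itself is a hardware extension; a software analogue of the guarded execution it provides (a sketch only, built on an ordinary lock and condition under the hood) looks like:

```python
import threading

class Guarded:
    """Toy guarded-execution wrapper: run `action` only once `guard`
    holds; every state update wakes waiters so guards are re-checked,
    mimicking EasySync's wakeup-on-real-condition-change guarantee."""

    def __init__(self):
        self._cond = threading.Condition()
        self.state = {}

    def update(self, key, value):
        with self._cond:
            self.state[key] = value
            self._cond.notify_all()   # any change re-evaluates guards

    def when(self, guard, action):
        with self._cond:
            self._cond.wait_for(guard)
            return action()
```

The software version must conservatively wake all waiters on every update; EasySync's hardware monitoring is what lets waiters sleep until the specific condition they depend on actually changes.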
510

The Baseband Signal Processing and Circuit Design for 868/915MHz Mode of the IEEE802.15.4 Low Rate-Wireless Personal Area Network (LR-WPAN)

Huang, Shih-Hung 14 July 2005 (has links)
The IEEE802.15.4 Low Rate-Wireless Personal Area Network (LR-WPAN) is characterized by its low power consumption, low cost, and reliable data transfer. An LR-WPAN can be used for security monitoring via various sensors that can be placed anywhere in a factory or home. This work implements the baseband signal processing and circuit design for the 868/915MHz mode of the IEEE802.15.4 LR-WPAN. The development process includes algorithm design, system simulation, FPGA implementation, and system measurement. The receiver algorithm includes packet detection, phase mapping, frequency offset estimation, energy detection, synchronization, despreading, and differential decoding; all algorithms are completely described herein. The system simulation matches the required specifications after running the algorithms. Additionally, the algorithms are implemented in the Verilog hardware description language, with the design partitioned according to the hardware so that each link can be identified exactly. The simulations performed in this work include behavioral simulation and gate-level simulation. Finally, the design is uploaded to the FPGA and its results are verified against Matlab by examining the effects of transmission on the channel signal, including idle signals, initial phase, frequency offset, and noise. The frequency offset arises when the oscillators of the transmitter and receiver do not match. The transmitter signal from the logic analyzer is then input to the FPGA and tested to determine whether it matches the original transmitted signal. An LR-WPAN baseband circuit was thus successfully developed through the above procedures.
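Of the receiver steps listed, despreading is the simplest to sketch; the PN sequence and chip values below are illustrative bipolar (±1) stand-ins, not the 802.15.4 chip sequences:

```python
def spread(bits, pn):
    """Transmitter side: each data bit multiplies the whole PN sequence."""
    return [p if b else -p for b in bits for p in pn]

def despread(chips, pn):
    """Correlate each symbol-length block of received chips with the PN
    sequence; the sign of the correlation recovers the data bit."""
    n = len(pn)
    bits = []
    for i in range(0, len(chips), n):
        corr = sum(c * p for c, p in zip(chips[i:i + n], pn))
        bits.append(1 if corr > 0 else 0)
    return bits
```

The correlation sum also tolerates a few flipped chips per symbol, which is where the spreading gain against noise comes from.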
