About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
1

High-Q resonant circuits in the frequency range 600 to 1600 MCS using parallel-wire transmission lines.

Pearse, Charles D. January 1954 (has links)
This thesis describes the measurement of the selectivity factors and radiation resistances of resonant circuits consisting of sections of open parallel-wire transmission line terminated at each end by equal-diameter transverse metallic discs. Selectivity factors of 2,000 to 3,500 are easily obtained with these circuits. These high Qs occur for resonant sections only a few half-wavelengths in length, when the diameters of the discs lie in the broad region of 2.5λ to 3.5λ.
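The selectivity factor reported above is the standard Q of a resonant circuit. As a quick illustration (not taken from the thesis), Q can be computed from the resonant frequency and the half-power (-3 dB) bandwidth; the numbers below are invented for the example:

```python
def selectivity_factor(f0_hz, f_lower_hz, f_upper_hz):
    """Q from the resonant frequency and the half-power (-3 dB) bandwidth."""
    bandwidth = f_upper_hz - f_lower_hz
    if bandwidth <= 0:
        raise ValueError("upper half-power frequency must exceed the lower one")
    return f0_hz / bandwidth

# A circuit resonant at 1000 Mc/s with a 0.4 Mc/s half-power bandwidth:
q = selectivity_factor(1000e6, 999.8e6, 1000.2e6)
print(round(q))  # 2500 -- inside the 2,000-3,500 range reported above
```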
3

Task Scheduling Techniques for Distributed/Parallel Processing Systems

Sreenivasan, C R 04 1900 (has links)
Indian Institute of Science / This dissertation discusses the principles, techniques and approaches adopted in the design of task scheduling algorithms for Distributed Parallel Processing Computer Systems (DPCSs) connected to a network of front-end systems (FSs). The primary goal in the design of the scheduling algorithms is to minimise the total turnaround time of the jobs to be scheduled by maximising the utilisation of the resources of the DPCS with minimum data-communication overhead. Users present their jobs to be scheduled at the FS. The FS receives a job and generates a finite set of independent tasks based on mutually independent sections having inherent parallelism. Each task can be scheduled to a different available processor of the DPCS for concurrent execution. The tasks fall into three groups: compute-intensive tasks, input-output-intensive tasks, and tasks combining compute and input-output work. They may have almost the same execution time. Some tasks may have a larger execution time than the others due to precedence constraints; these are provided with logical breakpoints which can be used to break the tasks further into subtasks during scheduling. The technique of using breakpoints is most appropriate when the number of available processors exceeds the number of tasks to be scheduled. The tasks of a job thus generated are sent to the front-end processor (FEP, or host processor) of the DPCS in the form of a data flow graph (DFG). The DFG is used to model the tasks and represent the precedence (or data dependencies) among them. In order to preserve these constraints during scheduling and realise efficient utilisation of the resources of the DPCS, the DFG is structured in levels. The FEP of the DPCS has a resident Task Manager (TM).
The key function of the TM is to schedule the tasks to the appropriate processors of the DPCS, either statically or dynamically, based on the required resources. To realise efficient scheduling and utilisation of the processors, the TM uses a set of buffers, the Task Forwarding Buffer (TFB), Task Output Buffer (TOB) and Task Status Buffer (TSB), maintained by the FEP of the DPCS. The tasks of a job from the FS are received at the TFB. The TM picks up the set of tasks belonging to a level into a temporary buffer C and obtains the status of the processors of the DPCS. In order to realise both static and dynamic approaches to allocation, the task-to-processor relation is considered in the scheduling algorithm. If the number of tasks in C is equal to or greater than the number of processors available, one task is allocated per processor; the remaining tasks of C are scheduled as and when processors become available. This method of allocation is called the static approach. If the number of tasks in C is less than the number of processors available, the TM makes use of the logical breakpoints of the tasks to generate subtasks equal in number to the available processors, and each subtask is scheduled to a processor. This method is called the dynamic approach. In all cases the precedence constraints among the tasks are preserved by scheduling a successor task to the parent processor or a near-neighbouring processor, maintaining minimum data communication between them. Various Computational Fluid Dynamics problems were tested, and the objective of reduced total turnaround time and maximum utilisation of the processors was achieved. The total turnaround time achieved for different jobs varies between 51% and 86% with the static approach and between 16% and 89% with the dynamic approach. The utilisation of the processors varies between 50% and 92.5%. Hence a speed-up of 5- to 8-fold is realised.
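The static/dynamic allocation rule described above can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the task representation (an id plus a breakpoint count), the round-robin handling of leftover tasks in the static case, and the greedy breakpoint splitting are all assumptions.

```python
def schedule_level(tasks, processors):
    """Allocate the tasks of one DFG level to the available processors.

    tasks: list of (task_id, n_breakpoints); processors: list of processor ids.
    Returns (task_or_subtask, processor) pairs.
    """
    n_proc, n_task = len(processors), len(tasks)
    if n_task >= n_proc:
        # Static approach: one task per processor; leftover tasks are scheduled
        # as processors free up (modelled here as simple round-robin reuse).
        return [(tid, processors[i % n_proc]) for i, (tid, _) in enumerate(tasks)]
    # Dynamic approach: split tasks at their logical breakpoints until the
    # number of pieces matches the number of processors (or breakpoints run out).
    pieces = []
    deficit = n_proc - n_task
    for tid, n_bp in tasks:
        splits = min(n_bp, deficit)   # each breakpoint used yields one extra piece
        deficit -= splits
        if splits == 0:
            pieces.append(tid)
        else:
            pieces.extend("%s.%d" % (tid, k) for k in range(splits + 1))
    return list(zip(pieces, processors))
```

For example, two tasks on three processors, where the first task has breakpoints, yield three subtask allocations; three tasks on two processors fall back to the static rule.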
4

Lambda Bipolar Transistor (LBT) in Static Random Access Memory Cell

Sarkar, Manju 06 1900 (has links)
With a view to reducing the number of components in a Static Random Access Memory (SRAM) cell, the feasibility of using a Lambda Bipolar Transistor (LBT) in the bistable element of the cell has been explored in the present study. The LBT under consideration comprises an enhancement-mode MOSFET integrated with a parasitic bipolar transistor so as to perform as a negative-resistance device. LBTs have been fabricated and analysed for the study. The devices have been shown to function at much lower voltage and current levels than those reported earlier, and thus to be suitable for low-power applications. The agreements and discrepancies between the experimental results and the original DC model of the device have been highlighted and discussed. The factors contributing to the drain current of the MOSFET in the LBT have been identified. It has also been shown that in an operating LBT the MOSFET does not function as a discrete device under the same voltage and current conditions as in isolation: it is assessed to be influenced by the presence of the operating BJT, and this effect is felt more at lower current levels. With a separate, tailored p-well implantation, the possibility of fabricating LBTs with a CMOS technology is established. Along with a couple of polysilicon resistors, the LBTs have been successfully made to perform in the common-collector configuration as the bistable storage element of an SRAM cell (as proposed in the literature). The bistable element with the LBT in common-emitter mode has also been visualised and practically achieved with the fabricated devices. The WRITE transients for either case have been simulated for various WRITE voltage levels and hold times. The writing speeds achieved are found to be comparable with those of standard SRAMs.
The advantages and disadvantages of using the LBT in either mode have been highlighted and discussed; the power consumption of the bistable element is, however, shown to be the same in both modes. A different approach to READING has been proposed to overcome the factors known to increase the cycle time. On the whole, the present study shows that using LBTs in the bistable storage element of an SRAM cell is feasible. Such SRAM circuits can find applications in fields where smaller circuit area is the major concern.
5

Why only two ears? Some indicators from the study of source separation using two sensors

Joseph, Joby 08 1900 (has links)
In this thesis we develop algorithms for estimating broadband source signals from a mixture using only two sensors. This is motivated by what is known in the literature as the cocktail party effect: the ability of human beings to listen to a desired source in a mixture of sources with at most two ears. Such a study lets us achieve a better understanding of the auditory pathway in the brain and confirm results from physiology and psychoacoustics; look for an equivalent structure in the brain corresponding to each modification that improves the algorithm; build a benchmark system to automate the evaluation of systems like 'surround sound'; and perform speech recognition in noisy environments. Moreover, what we learn about replicating the functional units of the brain may help us replace those units with signal processing hardware for patients suffering from defects in them. There are two parts to the thesis. In the first part we assume the source signals to be broadband with strong spectral overlap, and the channel to have a few strong multipaths. We propose an algorithm to estimate all the strong multipaths from each source to the sensors, for more than two sources, from the measurements of two sensors. Because the channel matrix is not invertible when the number of sources exceeds the number of sensors, we make use of the estimated multipath delays for each source to improve the SIR of the sources. In the second part we look at a specific scenario of coloured signals and a channel with a prominent direct path. Speech signals as the sources in a weakly reverberant room, recorded by a pair of microphones, satisfy these conditions. We consider the cases with and without a head-like structure between the microphones; the head-like structure we used was a cubical block of wood. We propose an algorithm for separating sources under such a scenario.
We identify the features of speech and of the channel which make it possible for the human auditory system to solve the cocktail party problem; these properties are the same as those satisfied by our model. The algorithm works well in a partly acoustically treated room (with three persons speaking, two microphones, and data acquired using a standard PC setup) and not so well in a heavily reverberant scenario. We see similarities between the processing steps of the algorithm and what we know of the way our auditory system works, especially in the regions before the auditory cortex in the auditory pathway. Based on the above experiments we give reasons to support the hypothesis of why all known organisms need only two ears and not more, but may have more than two eyes to their advantage. Our results also indicate that part of the pitch estimation for individual sources might occur in the brain after the individual source components are separated; this would resolve the dilemma of having to do multi-pitch estimation. Recent works suggest that there are parallel pathways in the brain, up to the primary auditory cortex, dealing with temporal-cue-based and spatial-cue-based processing. Our model seems to mimic the pathway which makes use of the spatial cues.
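The spatial cue central to the two-sensor scenario above is the inter-sensor delay of each source. A minimal sketch of delay estimation by cross-correlation follows; this is the textbook technique, not necessarily the algorithm of the thesis, and the brute-force lag search is an assumed simplification.

```python
def delay_estimate(x, y, max_lag):
    """Estimate the delay (in samples) of y relative to x by picking the lag
    that maximizes their cross-correlation -- the basic spatial cue a
    two-sensor separation algorithm can exploit."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        # Correlate x against y shifted by `lag`, staying inside both signals.
        val = sum(x[n] * y[n + lag]
                  for n in range(len(x))
                  if 0 <= n + lag < len(y))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag
```

A transient in one channel that reappears two samples later in the other yields an estimated delay of 2.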
6

Methods for Blind Separation of Co-Channel BPSK Signals Arriving at an Antenna Array and Their Performance Analysis

Anand, K 07 1900 (has links)
Capacity improvement of wireless communication systems is a very important area of current research. The goal is to increase the number of users supported by the system per unit of allotted bandwidth. One important way of achieving this improvement is to use multiple antennas backed by intelligent signal processing. In this thesis, we present methods for blind separation of co-channel BPSK signals arriving at an antenna array. These methods consist of two parts: constellation estimation and assignment. We give two methods for constellation estimation, Smallest Distance Clustering and Maximum Likelihood Estimation. While the latter is theoretically sound, the former is computationally simple and intuitively appealing. We show that Maximum Likelihood constellation estimation is well approximated by the Smallest Distance Clustering algorithm at high SNR. The assignment algorithm exploits the structure of the BPSK signals. We observe that both methods for estimating the constellation vectors perform very well at high SNR and nearly attain the Cramer-Rao bounds. Using this fact, and noting that the assignment algorithm causes negligible error at high SNR, we derive an upper bound on the probability of bit error for the above methods at high SNR. This upper bound falls very rapidly with increasing SNR, showing that our constellation estimation-assignment approach is very efficient. Simulation results are given to demonstrate the usefulness of the bounds.
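The Smallest Distance Clustering idea can be sketched as follows for scalar received samples. The thesis works with constellation vectors from an antenna array, so this one-dimensional version, its sign-folding step, and the tolerance parameter are purely illustrative assumptions.

```python
def estimate_bpsk_constellation(samples, tol=0.5):
    """Smallest-distance clustering sketch: fold each received sample by sign
    (BPSK symmetry: +c and -c belong to the same constellation pair), then
    group samples whose distance to an existing cluster centre is below tol."""
    centres, members = [], []
    for s in samples:
        v = abs(s)                      # exploit the +/- symmetry of BPSK
        for i, c in enumerate(centres):
            if abs(v - c) < tol:
                members[i].append(v)
                # running mean of the cluster as the constellation estimate
                centres[i] = sum(members[i]) / len(members[i])
                break
        else:
            centres.append(v)
            members.append([v])
    return sorted(centres)
```

With noisy samples drawn around ±1.0 and ±2.0, the sketch recovers two constellation magnitudes near 1.0 and 2.0, matching the high-SNR behaviour the abstract describes.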
7

Design and Development of a Hybrid TDMA/CDMA MAC Protocol for Multimedia Wireless Networks

D, Rajaveerappa 04 1900 (has links)
A wireless local area network (WLAN) provides high bandwidth to users in a limited geographical area, but faces challenges and constraints not imposed on its wired counterparts: frequency allocation, interference and reliability, security, power consumption, human safety, mobility, connection to the wired LAN, service area, handoff and roaming, dynamic configuration, and throughput. The wireless medium relies heavily on the features of the MAC protocol, and the MAC protocol is the core of medium access control for WLANs. The available MAC protocols all have their own merits and demerits. In this research we propose a hybrid MAC protocol for WLANs. In the design, we combine the merits of TDMA and CDMA systems to improve the throughput of the WLAN in a picocellular environment. We use the reservation and polling methods of MAC protocols to handle both low and high data traffic from the mobile users, and we strictly follow the IEEE 802.11 standard for WLANs in implementing the designed MAC protocol. We have simulated the hybrid TDMA/CDMA-based MAC protocol combined with RAP (Randomly Addressed Polling) for wireless local area networks, developed closed-form mathematical expressions for this protocol analytically, studied the power control aspects in this environment, and derived closed-form mathematical expressions for the power control technique. The hybrid protocol is capable of integrating different types of traffic (such as CBR, VBR and ABR services) and complies with the requirements of next-generation systems. Lower traffic arrivals are dealt with by random access and higher traffic arrivals by polling. This enables us to obtain higher throughput and lower mean delay than contention-reservation-based MAC schemes.
The protocol offers the ability to integrate different types of services in a flexible way through the use of multiple slots per frame, while CDMA allows multiple users to transmit simultaneously using their own codes. The RAP uses an efficient back-off algorithm to improve throughput at higher arrival rates of user data. The performance is evaluated in terms of throughput, delay, and rejection rate using computer simulation. A detailed simulation is carried out on the maximum number of users that each base station can support on a lossy channel. This work analyses the desired user's signal quality in a single-cell CDMA (Code Division Multiple Access) system in the presence of MAI (Multiple Access Interference). Earlier power control techniques were designed to ensure that all signals are received with equal power levels; since these algorithms are designed for imperfect control of power, the capacity of the system is reduced for a given BER (Bit Error Rate). We propose an EPCM (Efficient Power Control Mechanism) designed for the reverse link (mobile to base station), considering path loss, log-normal shadowing and Rayleigh fading. To further improve the performance of the designed MAC protocol, we have simulated the following: the protocol under different traffic conditions; multimedia traffic under application-oriented QoS requirements; buffer management and resource allocation; call admission control (hand-offs and the arrival of new users); adaptability to the variable nature of the traffic; and propagation aspects of the wireless medium. The proposed MAC protocol has been simulated and analysed using C++/MATLAB programming in an IBM/Sun Solaris UNIX environment, with results plotted using MATLAB. All functions of the protocol have been tested both by analysis and by simulation.
The call admission control function of the protocol has been tested by simulation and analysis in a multimedia wireless network topology; the analysis shows that at low traffic the throughput is high, and at high traffic the throughput is held constant at a reasonably high value. The simulation results corroborate the analysis. The dynamic channel allocation function was tested and analysed, and the corroborating results likewise show high throughput at low traffic and constant throughput at high traffic. Simulation of the buffer management function shows that packet loss can be kept to a minimum by adjusting the buffer threshold level under any traffic conditions. Maintenance of data transfer during hand-offs was simulated, and the results show that blocked calls are few at low traffic and can be kept constant at a low value at high traffic. Thus the proposed model aims at high throughput, high spectral efficiency, low delay, moderate BER and moderate blocking probability. We consider a picocell with a maximum of several users and study the power efficiency of combined channel coding and modulation with a perfectly power-controlled CDMA system. Our simulation of the "software radio" thus has the flexibility to choose the proper channel coders dynamically depending on the variations of the AWGN channel.
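The reverse-link channel model named in the abstract, distance path loss plus log-normal shadowing plus Rayleigh fading, can be sketched as below. All parameter values (path-loss exponent, shadowing sigma, transmit power) are illustrative assumptions, not figures from the thesis.

```python
import math
import random

def received_power_dbm(tx_dbm, d_m, exponent=4.0, shadow_sigma_db=8.0):
    """One random draw of reverse-link received power under the three channel
    effects the abstract names. Parameters are illustrative assumptions."""
    # Deterministic distance path loss, 10*n*log10(d).
    path_loss_db = 10 * exponent * math.log10(d_m)
    # Log-normal shadowing: Gaussian in the dB domain.
    shadowing_db = random.gauss(0.0, shadow_sigma_db)
    # Rayleigh fading: power gain is exponentially distributed with mean 1.
    fading_db = 10 * math.log10(random.expovariate(1.0))
    return tx_dbm - path_loss_db + shadowing_db + fading_db
```

Averaged over many draws, the received power sits a little below tx_dbm minus the path loss (Rayleigh fading costs about 2.5 dB on the dB-average), which is the sort of margin a power control mechanism must close.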
8

Performance Analysis Of Root-MUSIC With Spatial Smoothing For Arbitrary And Uniform Circular Arrays

Reddy, K Maheswara 07 1900 (has links) (PDF)
No description available.
9

Compressed Domain Processing of MPEG Audio

Anantharaman, B 03 1900 (has links)
MPEG audio compression techniques significantly reduce the storage and transmission requirements for high-quality digital audio. However, compression complicates the processing of audio in many applications. If a compressed audio signal is to be processed, a direct method would be to decode the compressed signal, process the decoded signal, and re-encode it. This is computationally expensive due to the complexity of the MPEG filter bank. This thesis deals with the processing of MPEG compressed audio. The main contributions of this thesis are: a) extracting wavelet coefficients in the MPEG compressed domain; b) wavelet-based pitch extraction in the MPEG compressed domain; c) time scale modification of MPEG audio; d) watermarking of MPEG audio. The research contributions start with a technique for calculating several levels of wavelet coefficients from the output of the MPEG analysis filter bank. The technique exploits the Toeplitz structure which arises when the MPEG and wavelet filter banks are represented in matrix form. The computational complexity of extracting several levels of wavelet coefficients after decoding the compressed signal is compared with that of extracting them directly from the output of the MPEG analysis filter bank; the proposed technique is found to be computationally efficient for extracting the higher levels of wavelet coefficients. Extracting pitch in the compressed domain becomes essential when large multimedia databases need to be indexed: one may be interested, for example, in listening to a particular speaker, or to the male/female audio segments in a multimedia document. For this application, pitch information is one of the most basic and important features required. Pitch is essentially the time interval between two successive glottal closures. Glottal closures are accompanied by sharp transients in the speech signal, which in turn give rise to local maxima in the wavelet coefficients.
Pitch can therefore be calculated by finding the time interval between two successive maxima in the wavelet coefficients. It is shown that the computational complexity of extracting pitch in the compressed domain is less than 7% of that of uncompressed-domain processing. An algorithm for extracting pitch in the compressed domain is proposed, and its results for synthetic signals and for words uttered by male and female speakers are reported. In a number of important applications, one needs to modify an audio signal to render it more useful than the original. Typical applications include changing the time evolution of an audio signal (increasing or decreasing the rate of articulation of a speaker), or adapting a given audio sequence to a given video sequence. In this thesis, time scale modifications are performed in the subband domain such that, when the modified subband signals are given to the MPEG synthesis filter bank, the desired time scale modification of the decoded signal is achieved. This is done using sinusoidal modeling [1]. Each subband signal is modeled in terms of parameters such as amplitudes, phases and frequencies, and is subsequently synthesised from these parameters with Ls = k La, where Ls is the length of the synthesis window, k is the time scale factor and La is the length of the analysis window. As the PCM version of the time-scaled signal is not available, psychoacoustic-model-based bit allocation cannot be used; hence a new bit allocation is done using a subband coding algorithm. This method has been satisfactorily tested for time scale expansion and compression of speech and music signals. The recent growth of multimedia systems has increased the need for protecting digital media, and digital watermarking has been proposed as a method for protecting digital documents. The watermark needs to be added to the signal in such a way that it does not cause audible distortions.
However, the idea behind lossy MPEG encoders is to remove, or make insignificant, those portions of the signal which do not affect human hearing. This renders the watermark insignificant, and hence proving ownership of the signal becomes difficult when the audio is compressed. The existing compressed-domain methods merely change the bits or the scale factors according to a key; though simple, these methods are not robust to attacks, and they require the original signal to be available in the verification process. In this thesis we propose a watermarking method based on the spread spectrum technique which does not require the original signal during verification, and which is shown to be more robust than the existing methods. In our method the watermark is spread across many subband samples. Two factors need to be considered: a) the watermark should be embedded only in those subbands where the added noise is inaudible; b) the watermark should be added to subbands with sufficient bit allocation, so that it does not become insignificant through lack of bits. Embedding the watermark in the lower subbands would cause distortion, and embedding it in the higher subbands would prove futile, as the bit allocation there is practically zero. Considering all these factors, one can introduce noise to samples across many frames in subbands 4 to 8. In the verification process, it is sufficient to have the key/code and the possibly attacked signal. The method has been satisfactorily tested for robustness to scale-factor change, LSB change, and MPEG decoding and re-encoding.
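The pitch-extraction rule described in this abstract, the interval between successive maxima of the wavelet coefficients, can be sketched as follows; the threshold-based peak picking and the averaging over intervals are assumed simplifications, not the thesis algorithm.

```python
def pitch_from_maxima(coeffs, fs, threshold):
    """Pitch as the interval between successive local maxima of the wavelet
    coefficients (each sharp glottal-closure transient yields one maximum).

    coeffs: wavelet coefficient magnitudes; fs: sampling rate in Hz.
    Returns the pitch in Hz, or None if fewer than two maxima are found."""
    peaks = [n for n in range(1, len(coeffs) - 1)
             if coeffs[n] > threshold
             and coeffs[n] >= coeffs[n - 1] and coeffs[n] > coeffs[n + 1]]
    if len(peaks) < 2:
        return None
    periods = [b - a for a, b in zip(peaks, peaks[1:])]
    mean_period = sum(periods) / len(periods)
    return fs / mean_period
```

A synthetic coefficient train with maxima every 100 samples at an 8 kHz sampling rate gives a pitch of 80 Hz.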
10

Algorithms For Efficient Implementation Of Secure Group Communication Systems

Rahul, S 11 1900 (has links)
A distributed application may be considered as a set of nodes which are spread across the network and need to communicate with each other. The design and implementation of such distributed applications is greatly simplified by Group Communication Systems (GCSs), which provide multipoint-to-multipoint communication; GCSs can thus be used as building blocks for implementing distributed applications. The GCS is responsible for the reliable delivery of group messages and the management of group membership. The peer-to-peer model and the client-server model are the two models of distributed systems for implementing GCSs. In this thesis, our focus is on improving the capability of GCSs based on the client-server model. Security is an important requirement of many distributed applications, and for such applications security has to be provided in the GCS itself. The security of a GCS includes confidentiality, authentication and non-repudiation of messages, and ensuring that the GCS properly meets its guarantees. The complexity and cost of implementing these three types of security guarantee depend greatly on whether or not the GCS servers are trusted by the group members. Making use of GCS services provided by untrusted servers becomes necessary when the servers are managed by a third party. In this thesis, we propose algorithms for ensuring the above three security guarantees for GCSs in which the servers are not trusted. As part of the solution, we propose a new digital multisignature scheme which allows group members to verify that a message has indeed been signed by all group members. The various group key management algorithms proposed in the literature differ from each other with respect to four metrics: communication overhead, computational overhead, storage at each member, and distribution of load among group members.
We identify the need for a distributed group key management algorithm which minimises the computational overhead on the group members, and propose an algorithm to achieve it.
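For scale, the four metrics listed in this abstract are often evaluated against a balanced binary key tree (an LKH-style structure; this baseline is an assumption of the sketch, not a scheme proposed by the thesis). A back-of-the-envelope cost calculation:

```python
import math

def key_tree_costs(n_members):
    """Rough per-member costs for a balanced binary key tree: each member
    stores the keys on its leaf-to-root path, and a member leave forces a
    rekey of every key on that path, each encrypted for two subtrees."""
    depth = math.ceil(math.log2(n_members)) if n_members > 1 else 1
    return {
        "keys_stored_per_member": depth + 1,   # path keys plus the leaf key
        "rekey_messages_on_leave": 2 * depth,  # two encryptions per level
    }
```

For a group of 8 members this gives 4 stored keys and 6 rekey messages, i.e. logarithmic rather than linear growth in the group size, which is the kind of trade-off the four metrics capture.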
