151

On perturbations of delay-differential equations with periodic orbits

Weedermann, Marion 05 1900 (has links)
No description available.
152

Towards More Efficient Delay Measurements on the Internet

Webster, Patrick Jordan 16 December 2013 (has links)
As more applications rely on distributed systems (peer-to-peer services, content distribution networks, cloud services), it becomes necessary to identify hosts that return content to the user with minimal delay. A large-scale map of delays would aid in solving this problem. Existing methods, which deploy devices to every region of the Internet or use a single vantage point, have yet to create such a map. While services such as PlanetLab offer a distributed network for measurements, they cover only 0.3% of the Internet. The focus of our research is to increase the speed of the single-vantage-point approach so that it becomes a feasible solution. We evaluate the feasibility of performing large-scale measurements by conducting an experiment using more hosts than any previous study. First, an efficient scanning algorithm is developed to perform the measurement scan. We then find that a custom Windows network driver is required to overcome bottlenecks in the operating system. After developing a custom driver, we perform a measurement scan larger than any previous study. Analysis of the results reveals previously unidentified drawbacks in the existing architectures and measurement methodologies. We propose novel methods for increasing the speed of experiments, improving the accuracy of measurement results, and reducing the amount of traffic generated by the scan. Finally, we present architectures for performing an Internet-scale measurement scan. We found that, with custom drivers, the Windows operating system is a capable platform for performing large-scale measurements. Scan results showed that in the eleven years since the original measurement technique was developed, the response patterns it relied upon had changed from what was expected. With our suggested improvements to the measurement algorithm and proposed scanning architectures, it may be possible to perform Internet-scale measurement studies in the future.
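The abstract does not reproduce the measurement methodology itself. Purely as a rough, hypothetical illustration of a single-vantage-point delay probe, the Python sketch below times a TCP connection handshake; the target addresses and port are placeholders, and the thesis works at the driver level and at far higher probing rates.

```python
import socket
import time

def tcp_rtt(host, port=80, timeout=2.0):
    """Return the TCP connect time to host:port in milliseconds, or None on failure."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            pass
    except OSError:
        return None
    return (time.perf_counter() - start) * 1000.0

# Probe a handful of placeholder targets sequentially (documentation addresses).
for target in ["192.0.2.1", "198.51.100.1", "203.0.113.1"]:
    rtt = tcp_rtt(target)
    print(target, "unreachable" if rtt is None else f"{rtt:.1f} ms")
```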
153

Performance Analysis of Isolated Intersection Traffic Signals

Yin, Kai 16 December 2013 (has links)
This dissertation analyzes two unsolved problems to fill a gap in the literature: (1) What are the vehicle delay and intersection capacity when left-turn traffic at a pre-timed signal is considered? (2) What are the mean and variance of the delay to vehicles at a vehicle-actuated signal? The first part of this research evaluates intersection performance in terms of capacity and delay at an isolated pre-timed signalized intersection. Despite a large body of literature on pre-timed signals, little work has examined the interactions between left-turn and through vehicles. Usually a protected left-turn phase, before (leading) or after (lagging) the through signal, is applied at a signalized intersection when traffic demand is relatively high. A common problem for leading left-turn operation is the blockage of left-turn vehicles by through traffic, particularly at an intersection with a short left-turn bay. During the peak hour, some vehicles in the through lane might not be able to depart by the end of a cycle, resulting in an increased probability of left-turn blockage. In turn, the blocked left-turn vehicles may also delay the through traffic entering the intersection during the following cycle. These problems may not exist for a lagging left-turn operation, but left-turn vehicles then tend to spill out of the bay under heavy traffic; in this case the through capacity is reduced, leading to an increase in total delay. All of these factors make it difficult to estimate the delay and capacity of an isolated intersection. To address this gap, two probabilistic models are proposed to deal with left-turn bay blockage and queue spillback in a heuristic manner. Numerical case studies are also provided to test the proposed models. The second part of this research studies an isolated intersection with a vehicle-actuated signal. Typically, an advance detector is located at a distance upstream of the intersection so that an arriving vehicle triggers a green-time extension and can pass through without stopping. This extended time period actuated by the vehicle is called the unit extension in this study. If no vehicle actuation occurs during a unit extension, the green phase terminates in order to clear queues on other approaches. In this way, the actuated system dynamically allocates green time among multiple approaches according to vehicle arrivals, and the unit extension is the only control parameter in this case. We develop a model to study vehicle delay under a general arrival distribution with a given unit extension; the model allows intersection performance to be optimized over the unit extension. The third part of this research applies graphical methods and diffusion approximations to traffic signal problems. We reinterpret a graphical method originally proposed by Newell to directly measure the variance of the queue-clearance time at a signalized intersection, a quantity that has yet to be carefully examined in practice and would be rather challenging to obtain using conventional queueing techniques alone. Our results demonstrate that the graphical method explicitly captures both the deterministic and the stochastic delay. We also show that the theoretical background for the graphical method in this particular application is inherently the diffusion approximation. Furthermore, we investigate disruptions occurring during a pre-timed traffic signal cycle.
Using the diffusion approximation, we provide a quantitative estimate of how long it takes for the effects of such disruptions to dissipate.
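For context, the deterministic component of delay at a pre-timed signal is commonly expressed with the uniform-delay term of Webster's formula; this is a textbook baseline, not one of the dissertation's probabilistic models:

$$ d_1 = \frac{C\,(1 - g/C)^2}{2\,\bigl(1 - \min(1, X)\, g/C\bigr)}, $$

where $C$ is the cycle length, $g$ the effective green time, and $X = v/c$ the degree of saturation. The dissertation's contribution concerns the stochastic effects (left-turn blockage, queue spillback, actuated control) that this baseline ignores.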
154

Mobile WiMAX: Pre-handover optimization using hybrid base station selection procedure

Mandal, Arpan January 2008 (has links)
A major consideration for mobile WiMAX is seamless handoff. The British English term for transferring a cellular call is handover, whereas American usage prefers handoff. Cellular-based standards have the advantage of many years' experience with handover for voice calls, while for broadband, mobility in itself is no mean feat and handover is still a challenge. Mobile IP, with "slow" handover, will be fine for web browsing but not good enough for decent voice quality. Many services require the appearance of seamless connections (VoIP, VPNs, etc.). Much of the complexity (and latency) in the cellular network comes from maintaining these connections across cell boundaries. Handovers in wireless technologies have always been a challenging topic of discussion. According to the mobility framework of IEEE 802.16e, a Mobile Station (MSS) should scan the neighbouring Base Stations (BSs) to select the best BS for a potential handover. However, the standard does not specify the number of BSs to be scanned, leaving room for unnecessary scanning. Moreover, prolonged scanning interrupts data transmissions, degrading the QoS of an ongoing connection. Reducing unnecessary scanning is therefore an important issue. This thesis proposes a scheme to reduce the number of BSs to scan, thereby improving overall handover performance. Simulation results show that the proposed hybrid predictive BS selection scheme for potential scanning activities is more effective than the conventional IEEE 802.16e handover scheme in terms of handover delay and resource wastage. Before the actual handover process, there is scope for reducing the total number of message exchanges between the MSS, the serving BS (SBS) and the neighbouring BSs that are potential handover targets. Simulations show that it can take up to 700 ms to decide on the target BS before initiating the handover process with it. There are multiple message exchanges to choose a set of potential target BSs from all the neighbouring BSs, and a few more messages flow between the MSS, SBS and potential target BSs to choose the best candidate BS for handover. These many stages and messages waste time and could be reduced. This thesis discusses some ways to reduce them and backs them up with simulation results.
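The hybrid selection procedure itself is not detailed in the abstract. Purely as an illustrative sketch of the general idea it describes (rank neighbour BSs by a predicted metric and scan only the most promising few), the hypothetical Python fragment below is not the thesis's algorithm; the field names and thresholds are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class NeighborBS:
    bs_id: str
    predicted_rssi_dbm: float   # e.g. predicted from movement history (hypothetical)
    recently_visited: bool      # hypothetical hint from the MSS's association history

def select_scan_candidates(neighbors, k=3, rssi_floor_dbm=-90.0):
    """Rank neighbour BSs and return at most k candidates worth scanning."""
    viable = [n for n in neighbors if n.predicted_rssi_dbm >= rssi_floor_dbm]
    # Prefer recently visited BSs, then stronger predicted signal.
    viable.sort(key=lambda n: (n.recently_visited, n.predicted_rssi_dbm), reverse=True)
    return viable[:k]

candidates = select_scan_candidates([
    NeighborBS("BS-1", -72.0, True),
    NeighborBS("BS-2", -95.0, False),   # below the floor, skipped
    NeighborBS("BS-3", -80.0, False),
])
print([c.bs_id for c in candidates])    # ['BS-1', 'BS-3']
```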
155

The ascertainment of claims for delay and disruption

Bloore, Richard David Stanford January 1991 (has links)
No description available.
156

Steady State/Hopf Interactions in the Van Der Pol Oscillator with Delayed Feedback

Bramburger, Jason 12 July 2013 (has links)
In this thesis we consider the traditional Van der Pol oscillator with a forcing that depends on delayed feedback. The delayed feedback is taken to be a nonlinear function of both position and velocity, which gives rise to many different types of bifurcations. In particular, we study the Zero-Hopf bifurcation that takes place at certain parameter values, using centre manifold reduction for delay differential equations (DDEs) and normal form theory. We present numerical simulations, accurately predicted by the phase portraits of the Zero-Hopf bifurcation, that confirm our results and provide a physical understanding of the oscillator with delayed feedback.
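A generic form of a Van der Pol oscillator with delayed feedback of the kind described (the exact feedback function and parameters in the thesis may differ) is

$$ \ddot{x}(t) - \epsilon\bigl(1 - x^2(t)\bigr)\dot{x}(t) + x(t) = k\, f\bigl(x(t-\tau),\, \dot{x}(t-\tau)\bigr), $$

where $\tau > 0$ is the feedback delay, $k$ a feedback gain, and $f$ a nonlinear function of the delayed position and velocity. A Zero-Hopf point occurs at parameter values where the linearization has a zero eigenvalue together with a purely imaginary pair.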
157

Psychological theories of hyperactivity : a behaviour genetic approach

Kuntsi, Jonna Pauliina January 1998 (has links)
This study was an attempt to combine two research literatures on hyperactivity: the behaviour genetic research and the studies testing psychological theories of hyperactivity. We obtained behavioural ratings from the teachers of 1316 twin pairs, aged 7-12, from the general population. For a subsample of 268 twin pairs we obtained ratings also from their parents. Forty-six hyperactive twin pairs (pairs in which at least one twin was pervasively hyperactive) and 47 control twin pairs were then assessed on tests relating to three theories of hyperactivity, those of response inhibition deficit, working memory impairment and delay aversion. Confirming previous findings, genetic factors accounted for 50-70% of the variance in hyperactivity when considered as a continuous dimension. There was also significant evidence of genetic effects on extreme hyperactivity, although the present group heritability estimates were somewhat lower than previous estimates. The hyperactive group performed worse than the control group on the delay aversion measure and some of the working memory tasks. Controlling for IQ removed the significant group differences on the working memory measures, however. Although there were no significant group differences on the inhibition variables, the inhibition measure, stop task, produced evidence of a pattern of responding that was strongly characteristic of hyperactivity: hyperactive children were variable in their speed, generally slow and inaccurate. This pattern of responding may indicate a non-optimal effort/activation state. To investigate the possibility that the cognitive impairments or task engagement factors associated with hyperactivity mediate the genetic effects on the condition, bivariate group heritability analyses were carried out. There was significant evidence of shared genetic effects only on extreme hyperactivity and the variability of speed. The findings are interpreted as supporting the state regulation theory of hyperactivity. Although delay aversion is a characteristic of hyperactivity, it seems to have an environmental rather than a genetic origin.
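For context, twin-based heritability estimates such as the 50-70% figure quoted above are often first approximated from twin correlations with Falconer's formula (the study itself relies on formal model fitting, so this is purely illustrative):

$$ h^2 \approx 2\,(r_{\mathrm{MZ}} - r_{\mathrm{DZ}}), $$

where $r_{\mathrm{MZ}}$ and $r_{\mathrm{DZ}}$ are the intraclass correlations of the trait for monozygotic and dizygotic twin pairs, respectively.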
158

Design and Performance Analysis of Opportunistic Routing Protocols for Delay Tolerant Networks

Abdel-kader, Tamer Ahmed Mostafa Mohammed January 2012 (has links)
Delay Tolerant Networks (DTNs) are characterized by the lack of continuous end-to-end connections because of node mobility, constrained power sources, and the limited data storage space of some or all of their nodes. Applications of DTNs include vehicular networks and sensor networks in suburban and rural areas. The intermittent connectivity in DTNs creates a new and challenging environment that has not been tackled before in wireless and wired networks. Traditional routing protocols fail to deliver data packets because they assume the existence of continuous end-to-end connections. To overcome the frequent disconnections, a DTN node is required to store data packets for long periods of time until it comes within communication range of other nodes. In addition, to increase the delivery probability, a DTN node spreads multiple copies of the same packet over the network so that one of the copies reaches the destination. Given the limited storage and energy resources of DTN nodes, there is a trade-off between maximizing delivery and minimizing storage and energy consumption. DTN routing protocols can be classified as either blind routing, in which no information is used to select the next node in the path, or guided routing, in which some network information is used to guide data packets to their destinations. They also differ in the amount of overhead they impose on the network and its nodes. The objective of DTN routing protocols is to deliver as many packets as possible. Acquiring network information helps in maximizing the packet delivery probability and minimizing the network overhead that results from replicating many packet copies. Network information could include node contact times and durations, node buffer capacities, packet lifetimes, and many other quantities. The more information acquired, the higher the achievable performance. However, the cost of acquiring the network information, in terms of delay and storage, could be high enough to render the protocol impractical. In designing a DTN routing protocol, the trade-off between the benefits of acquiring information and its costs should be considered. In this thesis, we study the routing problem in DTNs with limited resources. Our objective is to design and implement routing protocols that effectively handle the intermittent connectivity in DTNs and achieve high packet delivery ratios at a lower delivery cost. Delivery cost is measured in terms of the number of transmissions per delivered packet; decreasing the delivery cost means less network overhead and less energy consumption per node. To achieve this objective, we first target the optimal results that could be achieved in an ideal scenario. We formulate a mathematical model for optimal routing, assuming the presence of a global observer that can collect information about all the nodes in the network. The optimal results provide bounds on the performance metrics and show the room for improvement to be worked on. However, optimal routing with a global observer is only a theoretical model and cannot be implemented practically. In DTNs, there is a need for a distributed routing protocol that utilizes local, easily collectable data. Therefore, we investigate the different types of heuristic (non-optimal) distributed routing protocols, showing their strengths and weaknesses. Out of this large collection of protocols, we select four that represent different routing classes and are well known and frequently cited by others working in the same area.
We implement the protocols in a DTN simulator and compare their performance under different network and node conditions. We study the impact of changing node buffer capacities, packet lifetimes, the number of nodes, and traffic load on the performance metrics, namely delivery ratio, delivery cost, and average packet delay. Based on these comparisons, we draw conclusions and guidelines for designing an efficient DTN routing protocol. Given these design guidelines, we develop our first DTN routing protocol, Eco-Friendly Routing for DTN (EFR-DTN), which combines the strengths of two previously proposed protocols to provide a better delivery ratio with low network overhead (less power consumption). The protocol utilizes node encounters to estimate the route to the destination while minimizing the number of packet copies throughout the network. All current DTN routing protocols strive to estimate the route from source to destination, which requires collecting information about node encounters. In addition to the overhead this collection imposes on the network, the time it takes could render the data worthless by the time it propagates through the network. Our next proposal is a routing protocol, Social Groups Based Routing (SGBR), which uses social relations among network nodes to exclude nodes that are not expected to significantly increase the probability of delivering a packet to its destination. Using social relations detected from node encounters, nodes can form social groups. Nodes belonging to the same social group are expected to meet each other frequently and to meet nodes from other groups less frequently. Spreading packet copies inside the same social group is found to add little value for the carrying node in delivering a packet to its destination. Therefore, our proposed routing protocol spreads packet copies to other social groups, which decreases the number of copies throughout the network. We compare the new protocol with the optimal results and with existing well-known routing protocols using real-life simulations. Results show that the proposed protocol achieves a higher delivery ratio and lower average delay than the other protocols, with a significant reduction in network overhead. Finally, we discuss the willingness of DTN nodes to cooperate in routing services. From a network perspective, all nodes are required to participate in delivering each other's packets; from a node perspective, minimizing resource consumption is a critical requirement. We investigate the degree of fair cooperation at which all nodes are satisfied with their participation in the network's routing services. A new credit-based system is implemented to keep track of and reward node participation in packet routing. Results show that the proposed system improves fairness among nodes and increases their satisfaction.
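The SGBR forwarding rule is not specified in the abstract. The following hypothetical Python sketch only illustrates the kind of decision it describes: hand a copy to an encountered node when that node is the destination or belongs to a social group that does not yet hold a copy.

```python
def should_forward(carrier_group, encountered_node, destination_id, groups_with_copy):
    """Decide whether to hand a packet copy to an encountered node.

    carrier_group: social group id of the node currently carrying the packet
    encountered_node: dict with 'id' and 'group' keys (hypothetical representation)
    groups_with_copy: set of group ids already holding a copy of this packet
    """
    if encountered_node["id"] == destination_id:
        return True                       # always deliver directly
    if encountered_node["group"] == carrier_group:
        return False                      # same social group: little added value
    return encountered_node["group"] not in groups_with_copy

# Example
groups_with_copy = {"G1"}
print(should_forward("G1", {"id": "n7", "group": "G2"}, "n42", groups_with_copy))  # True
print(should_forward("G1", {"id": "n3", "group": "G1"}, "n42", groups_with_copy))  # False
```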
159

On Cyclic Delay Diversity OFDM Based Channels

Yousefi, Rozhin January 2012 (has links)
Orthogonal Frequency Division Multiplexing (OFDM) has found a prominent place in various wireless systems and networks as a method of encoding data over multiple carrier frequencies. OFDM-based communication systems, however, lack inherent diversity and can therefore benefit from different spatial diversity schemes. One such scheme, Cyclic Delay Diversity (CDD), is a method of providing spatial diversity that can also be interpreted as a Space-Time Block Coding (STBC) step. The main idea is to add more transmit antennas at the transmitter side, sending the same streams of data but with differing cyclic time delays. In [1], the capacity of a point-to-point OFDM-based channel with CDD is derived for inputs with Gaussian and discrete constellations. In this dissertation, we use the same approach for an OFDM-based single-input single-output (SISO) two-user interference channel (IC). In our model, the interference is treated as noise at the receiver side. Moreover, since the channel is time-varying (slow-fading), the Shannon capacity in the strict sense is not well defined, so the expected value of the instantaneous capacity is calculated instead. Furthermore, the channel coefficients are unknown to the transmitters; in this setting, the probability of outage emerges as a reasonable performance measure. Adding an extra antenna at the transmitters turns the SISO IC into a MISO IC, which increases the diversity. Both continuous and discrete inputs are studied, and it turns out that decoding the interference is helpful in some cases. Simulation results for discrete inputs indicate improvements in terms of outage capacity compared to ICs with single-antenna transmitters.
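The core CDD operation is easy to illustrate: each additional transmit antenna sends a cyclically shifted copy of the same time-domain OFDM symbol, which is equivalent to applying a per-subcarrier phase ramp. A minimal NumPy sketch, with arbitrary parameters that are not those of the dissertation:

```python
import numpy as np

N = 64                                   # number of subcarriers (arbitrary)
bits = np.random.randint(0, 2, (2, N))
qpsk = (2 * bits - 1) / np.sqrt(2)
symbols = qpsk[0] + 1j * qpsk[1]         # one OFDM symbol of QPSK data

x = np.fft.ifft(symbols) * np.sqrt(N)    # time-domain OFDM symbol (unitary scaling)

cyclic_delays = [0, 3]                   # per-antenna cyclic delays in samples
tx = [np.roll(x, d) for d in cyclic_delays]   # antenna k sends a cyclically shifted copy

# Equivalently, antenna k applies a phase ramp exp(-j*2*pi*d_k*n/N) on subcarrier n.
n = np.arange(N)
ramp = np.exp(-2j * np.pi * cyclic_delays[1] * n / N)
assert np.allclose(np.fft.fft(tx[1]) / np.sqrt(N), symbols * ramp)
```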
160

Shock-Tube Study of Methane Ignition with NO2 and N2O

Pemelton, John 2011 August 1900 (has links)
NOx produced during combustion can persist in the exhaust gases of a gas turbine engine in quantities significant enough to raise regulatory concerns. There has been much research leading to important insights into NOx chemistry. One method of NOx reduction is exhaust gas recirculation, in which a portion of the exiting exhaust gases is redirected to the inlet air stream that enters the combustion chamber along with the fuel. Because NOx present in the exhaust gases is subsequently introduced into the burner, knowledge of the effects of NOx on combustion is advantageous. In contrast to general NOx research, little has been done to investigate the sensitizing effects of NO2 and N2O addition on methane/oxygen combustion. Experiments were performed with dilute and real fuel-air mixtures of CH4/O2/Ar with the addition of NO2 and N2O; the real fuel-air mixtures were made with the addition of NO2 only. The equivalence ratios of the mixtures were 0.5, 1, and 2. The experimental pressure range was 1 - 44 atm, and the temperature range tested was 1177 - 2095 K. The additives NO2 and N2O were added in concentrations from 831 ppm to 3539 ppm. The mixtures with NO2 show a reduction in ignition delay time across the pressure ranges tested, and the mixtures with N2O show a similar trend. At 1.3 atm, the 831-ppm NO2 mixture shows a 65% reduction, and at 30 atm it shows a 75% reduction. The NO2 mixtures showed a larger decrease in ignition time than the N2O mixtures, and the real fuel-air mixture also showed a reduction. Sensitivity analyses were performed. The two most dominant reactions in the NO2 mixtures are H + O2 = O + OH and CH3 + NO2 = CH3O + NO. The second reaction is the means by which NO2 decreases ignition delay time, as indicated in the experimental results: it produces CH3O, which is reactive and can participate in chain-propagating reactions, speeding up ignition. The two dominant reactions for the N2O mixtures are H + O2 = O + OH and, interestingly, the reverse of the initiation reaction in the N2O mechanism, O + N2 + M = N2O + M. The reverse of this reaction is the direct decomposition of nitrous oxide, and the O atoms it produces can then speed up ignition by partaking in propagation reactions, which was experimentally observed.
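Ignition delay data of this kind are commonly correlated with an Arrhenius-type expression; the form below is a generic illustration, not the correlation fitted in this work:

$$ \tau_{\mathrm{ign}} = A\, \exp\!\left(\frac{E_a}{R\,T}\right)\,[\mathrm{CH_4}]^{a}\,[\mathrm{O_2}]^{b}, $$

where $\tau_{\mathrm{ign}}$ is the ignition delay time, $E_a$ an effective activation energy, $R$ the universal gas constant, $T$ the temperature, and the bracketed terms species concentrations with fitted exponents $a$ and $b$.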
