551 |
Mobile Location Estimation Using Genetic Algorithm and Clustering Technique for NLOS Environments
Hung, Chung-Ching 10 September 2007 (has links)
Given the mass demand for personalized security services, such as tracking, supervision, and emergency rescue, mobile-communication location technologies have drawn much attention from governments, academia, and industry around the world. However, existing location methods cannot satisfy the requirements of low cost and high accuracy. We hypothesized that a new mobile location algorithm based on the current GSM system would effectively improve user satisfaction. In this study, a prototype system is developed, implemented, and evaluated experimentally by integrating useful information such as the geometry of the cell layout and related mobile positioning technologies. The intersection of the regions formed by the communication coverage of the base stations is explored. Furthermore, a density-based clustering algorithm (DCA) and a GA-based algorithm are designed to analyze the intersection region and estimate the most probable location of a mobile phone. Simulation results show that the location error of the GA-based algorithm is less than 0.075 km 67% of the time, and less than 0.15 km 95% of the time. These results satisfy the location accuracy demand of E-911.
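As an illustrative sketch only (not the thesis's algorithm or data), a genetic search can estimate a handset position by minimizing the mismatch between candidate-to-station distances and measured ranges; the station layout, NLOS bias, and GA parameters below are all invented:

```python
import random

random.seed(7)

# Hypothetical base-station layout and true handset position (km).
STATIONS = [(0.0, 0.0), (2.0, 0.0), (1.0, 2.0)]
TRUE_POS = (0.8, 0.7)

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Measured ranges with a small positive NLOS-style bias (assumed model).
RANGES = [dist(TRUE_POS, s) + random.uniform(0.0, 0.05) for s in STATIONS]

def fitness(p):
    # Mismatch between candidate-to-station distances and measured ranges.
    return sum((dist(p, s) - r) ** 2 for s, r in zip(STATIONS, RANGES))

def ga_locate(pop_size=60, generations=80):
    pop = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        elite = pop[: pop_size // 4]          # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            # Midpoint crossover plus Gaussian mutation.
            cx = (a[0] + b[0]) / 2 + random.gauss(0, 0.05)
            cy = (a[1] + b[1]) / 2 + random.gauss(0, 0.05)
            children.append((cx, cy))
        pop = elite + children
    return min(pop, key=fitness)

estimate = ga_locate()
error_km = dist(estimate, TRUE_POS)
```

With the small biases assumed here, the fitness minimum sits close to the true position, so the GA estimate lands well inside the 0.15 km bound quoted above.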
|
552 |
Single-band and Dual-band Beam Switching Systems and Offset-fed Beam Scanning Reflectarray
Lee, Jungkyu 2012 May 1900 (has links)
The reflectarray has been considered a suitable candidate to replace conventional parabolic reflectors because of its high gain, low profile, and beam reconfiguration capability. Beam scanning capability and multi-band operation have been the main research topics in microstrip reflectarray design. The narrow bandwidth of the reflectarray is the main obstacle to its wider use. A wideband antenna element with a large phase variation range and a linear phase response is one solution to the reflectarray's narrow bandwidth.
A four-beam scanning reflectarray has been developed: an offset-fed microstrip reflectarray designed to emulate a cylindrical reflector. Unlike other microstrip reflectarrays, which integrate phase-tuning devices such as RF MEMS switches or phase shifters into the reflectarray elements to control the reflected phase, the beam scanning capability of this reflectarray is implemented by a phased array feed antenna. This method reduces the complexity of the beam switching reflectarray design. A simple method for developing multi-band elements is also investigated in this dissertation. To increase the coverage of the operating bands, a six-band reflectarray has been developed with two layers, each covering three frequency bands.
A Butler matrix is a useful beamforming network for a phased array antenna. A Double-Sided Parallel-Strip Line (DSPSL) is adopted for the feeding network of eight array elements. The DSPSL feeds the microstrip antenna array well over the bandwidth, reducing the sidelobe level while maintaining high gain. In addition, a dual-band Butler matrix has been proposed for multi-band applications, and a modified Butler matrix is used to reduce both size and sidelobe level.
The bandwidth of the microstrip antenna is inherently small. A broadband circularly polarized microstrip antenna with dual-offset feedlines is introduced in this dissertation. An aperture-coupled feed is used to feed the stacked patch antennas, and a slot-coupled directional coupler is used for circularly polarized operation.
The research presented in this dissertation suggests useful techniques for beam scanning microstrip reflectarray, phased array antenna, and wideband antenna designs in modern wireless communication systems.
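One way to see why a Butler matrix suits phased-array feeding: an ideal N-port Butler matrix implements, up to port ordering and fixed phase offsets, a unitary DFT, so each beam port excites the antenna ports with a progressive phase taper that steers the beam. A small numerical sketch, unrelated to the dissertation's specific designs:

```python
import cmath

def butler_matrix(n):
    # Ideal N x N Butler matrix modeled as a scaled DFT matrix:
    # unitary, so it is lossless between beam ports and antenna ports.
    return [[cmath.exp(-2j * cmath.pi * p * q / n) / n ** 0.5
             for q in range(n)] for p in range(n)]

def progressive_phases_deg(n, beam_port):
    # Phases seen at the antenna ports when one beam port is driven:
    # a linear (progressive) taper across the ports.
    row = butler_matrix(n)[beam_port]
    return [cmath.phase(v) * 180 / cmath.pi for v in row]

B = butler_matrix(4)
phases = progressive_phases_deg(4, 1)   # -90 degrees per element step
```

Driving beam port 1 of a 4x4 matrix gives port phases 0, -90, 180, 90 degrees, i.e. a -90 degree progressive taper.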
|
553 |
Ranking line-depth ratios for determining relative star temperatures in dwarfs
Edstam, Louise January 2013 (has links)
The central depths of absorption lines depend on stellar temperature. By dividing the central depth of such a line by a central depth that is independent of temperature, a thermometer of relative star temperatures is obtained in the form of a line-depth ratio (LDR), once it is related to an effective temperature scale. Such thermometers are known to give precise results, which is why the method is pursued. The purpose of this work is to rank LDRs according to a set of criteria to find the ratios most suitable for measuring temperature. This is done using a set of LDRs measured for a large sample of dwarf stars with known effective temperature, atmospheric pressure, and chemical composition. Numerous LDRs are eliminated because their temperature dependence is limited to a short temperature interval. Further LDRs are eliminated because of their dependence on atmospheric pressure and chemical composition. The remaining LDRs are ranked by the strength of their temperature dependence, the fit of the representative polynomial to the data points, and the number of data points available. The best-ranked LDR provides a temperature resolution smaller than 10 K over a temperature interval of 4500-6250 K, assuming an uncertainty in LDR of 0.01.
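The LDR-to-temperature calibration can be sketched numerically; the line depths below are invented (not the thesis data) and chosen so that an LDR uncertainty of 0.01 maps to the roughly 10 K resolution quoted above:

```python
# Invented calibration: an LDR falling linearly from 1.8 to 0.05
# across the 4500-6250 K interval.
temps = [4500 + 250 * i for i in range(8)]                 # K
ldrs = [1.8 - (t - 4500) * 0.001 for t in temps]

# Least-squares slope dLDR/dT of the calibration.
n = len(temps)
mt, ml = sum(temps) / n, sum(ldrs) / n
slope = (sum((t - mt) * (l - ml) for t, l in zip(temps, ldrs))
         / sum((t - mt) ** 2 for t in temps))

# An LDR uncertainty sigma maps to a temperature resolution of
# sigma / |slope|: the steeper the calibration, the finer the thermometer.
resolution_K = 0.01 / abs(slope)
```

This is exactly the ranking criterion "strength of temperature dependence": among otherwise acceptable ratios, the steepest slope gives the smallest resolution in kelvin.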
|
554 |
In-line Extrusion Monitoring and Product Quality
Farahani Alavi, Forouzandeh 15 September 2011 (has links)
Defects in polyethylene film are often caused by contaminant particles in the polymer melt. In this research, particle properties obtainable from in-line melt monitoring, combined with processing information, are used to predict film defect properties.
“Model” particles (solid and hollow glass microspheres, aluminum powder, ceramic microspheres, glass fibers, wood particles, and cross-linked polyethylene) were injected into low-density polyethylene extruder feed. Defects resulted when the particle-laden polyethylene was extruded through a film die and stretched by a take-up roller as it cooled, forming films 57 to 241 µm in thickness.
Two off-line analysis methods were further developed and applied to the defects: polarized light imaging and interferometric imaging. Polarized light revealed residual stresses in the film caused by the particle as well as properties of the embedded particle. Interferometry enabled measurement of the film distortion, notably the defect volume. From the images, only three attributes were required for mathematical modeling: particle area, defect area, and defect volume. These attributes yielded two “primary defect properties”: average defect height and magnification (of particle area). For all spherical particles, empirical correlations of these properties were obtained for each of the two major types of defects that emerged: high average height and low average height defects. Analysis of data for non-spherical particles was limited to showing how, in some cases, their data differed from the spherical particle correlations.
To help explain empirical correlations of the primary defect properties with film thickness, a simple model was proposed and found to be supported by the high average height defect data: the “constant defect volume per unit particle area” model. It assumes that the product of average defect height and magnification is a constant for all film thicknesses.
A numerical example illustrates how the methodology developed in this work can be used as a starting point for predicting film defect properties in industrial systems. A limitation is that each prediction yields two pairs of primary defect property values, one pair for each defect type. If the dominant type must be identified, then a length dimension of a sufficient number of defects in the film must be measured.
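The "constant defect volume per unit particle area" model reduces to simple arithmetic; the constant and the areas below are illustrative values, not measurements from the thesis:

```python
# Model assumption: defect volume V = K * A_particle, so
# average height h = V / A_defect and magnification m = A_defect / A_particle
# always satisfy h * m = K, for every film thickness.
K = 40.0                                  # assumed volume per unit particle area (um)

def defect_properties(particle_area_um2, defect_area_um2):
    volume = K * particle_area_um2        # the model's core assumption
    avg_height = volume / defect_area_um2
    magnification = defect_area_um2 / particle_area_um2
    return avg_height, magnification

# The same particle spreading into a small vs a large defect area:
# taller-and-narrow or shorter-and-wide, but h * m stays fixed at K.
h1, m1 = defect_properties(100.0, 500.0)    # h1 = 8.0, m1 = 5.0
h2, m2 = defect_properties(100.0, 2000.0)   # h2 = 2.0, m2 = 20.0
```

This invariance (the product of average defect height and magnification is constant) is what the high average height defect data were found to support.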
|
555 |
Timed power line data communication
Ackerman, Kevin W 17 February 2005
<p>With the ever-increasing demand for data communication methods, power line communication has become an interesting alternative for data communication. Power line communication falls into two categories: data transmission between sites in the power grid, and home or office networking. For home or office networking, existing methods are either too slow for tasks other than simple automation, or very fast but more expensive than necessary for the desired function. The objective of this work is to develop a lower cost communication system with an intermediate data transmission rate.</p><p>At first glance, power line communication looks like a good option because of the availability of power outlets in every room of a building. However, the power conductors were installed solely to distribute 60 Hz mains power; for data signals, they exhibit very high attenuation, variable impedance, and radio-frequency shielding. Furthermore, many of the 60 Hz loads produce radio-frequency interference that impedes data communication. Previous research has shown that much of the noise is time-synchronous with the 60 Hz mains frequency and that the majority of data errors occur during these periods of high noise.</p><p>
This work develops a power line communication protocol that coordinates transmissions and uses only the predictable times of lower noise. Using a central control strategy, the power line 60 Hz mains signal is divided into 16 timeslots and each timeslot is monitored for errors. The central controller periodically polls all stations to learn which timeslots have low noise and it then controls all transmissions to make the best use of these good timeslots. The periodic polling allows the system to adapt to changes in electrical loading and noise. This control strategy has been achieved with modest complexity and laboratory measurements have shown throughput approaching 70% of the modem bit rate.</p>
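The controller's slot-selection idea can be sketched in a few lines; the slot count of 16 follows the abstract, while the error rates, threshold, and noisy-slot positions below are invented for illustration:

```python
# Divide each 60 Hz mains cycle into 16 timeslots, poll per-slot error
# rates, and transmit only in slots below an error threshold.
N_SLOTS = 16
ERROR_THRESHOLD = 0.05

# Hypothetical polled error rates: assume slots near the mains noise
# bursts (here 0, 7, 8, 15) are bad, the rest are quiet.
polled_error_rate = [0.30 if s in (0, 7, 8, 15) else 0.01
                     for s in range(N_SLOTS)]

good_slots = [s for s in range(N_SLOTS)
              if polled_error_rate[s] < ERROR_THRESHOLD]

# Fraction of the raw modem bit rate that survives skipping noisy slots.
usable_fraction = len(good_slots) / N_SLOTS
```

Re-running the poll periodically lets the slot set track changes in electrical loading, which is how the protocol adapts; with 12 of 16 slots usable, throughput sits at 75% of the modem bit rate, in the same range as the measured "approaching 70%".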
|
556 |
Transmission loss allocation using artificial neural networks
Haque, Rezaul 07 April 2006
The introduction of deregulation and the subsequent open-access policy in the electricity sector has brought competition to the energy market. Allocation of transmission loss has become a contentious issue among electricity producers and consumers. A closed-form solution for transmission loss allocation does not exist because transmission loss is a highly non-linear function of system states and is a non-separable quantity. In the absence of a closed-form solution, different utilities use different methods for transmission loss allocation. Most of these techniques involve complex mathematical operations and time-consuming computations. A new transmission loss allocation tool based on an artificial neural network has been developed and is presented in this thesis. The proposed artificial neural network computes loss allocation much faster than other methods. The relatively short execution time of the proposed method makes it a suitable candidate for a real-time decision-making process. Most independent system variables can be used as inputs to this neural network, which makes the loss allocation procedure responsive to practical situations. Moreover, transmission line status (available or failed) was included in the neural network inputs to make the proposed network capable of allocating loss even during the failure of a transmission line. The proposed neural networks were used to allocate losses in two types of energy transactions: bilateral contracts and power pool operation. Two loss allocation methods were used to develop training and testing patterns: the Incremental Load Flow Approach for loss allocation in the context of bilateral transactions, and Z-bus allocation in the context of pool operation. The IEEE 24-bus reliability network was used to conduct studies and illustrate numerical examples for bilateral transactions, and the IEEE 14-bus network was used for pool operation.
Techniques were developed to expedite the training of the neural networks and to improve the accuracy of results.
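As a toy stand-in for the thesis's allocator (the real network architecture, power-flow inputs, and training patterns are not reproduced here), the sketch below trains a tiny one-hidden-layer network on a made-up mapping from two bus loads plus a binary line-status flag to a loss share:

```python
import math
import random

random.seed(0)

# Invented training set: loss share rises with load and jumps when the
# line-status flag indicates a failed line (st == 0).
data = [((l1, l2, st), 0.02 * l1 + 0.03 * l2 + (0.05 if st == 0 else 0.0))
        for l1 in (0.2, 0.5, 0.8)
        for l2 in (0.2, 0.5, 0.8)
        for st in (0, 1)]

H = 4                                    # hidden units
w1 = [[random.uniform(-0.5, 0.5) for _ in range(3)] for _ in range(H)]
w2 = [random.uniform(-0.5, 0.5) for _ in range(H)]

def forward(x):
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(v * h for v, h in zip(w2, hidden)), hidden

def mse():
    return sum((forward(x)[0] - y) ** 2 for x, y in data) / len(data)

initial = mse()
lr = 0.05
for _ in range(500):                     # plain stochastic gradient descent
    for x, y in data:
        out, hidden = forward(x)
        err = out - y
        for j in range(H):
            grad_h = err * w2[j] * (1 - hidden[j] ** 2)  # backprop through tanh
            w2[j] -= lr * err * hidden[j]
            for i in range(3):
                w1[j][i] -= lr * grad_h * x[i]
final = mse()
```

Once trained, a forward pass is a handful of multiplications, which is the point made above: inference is fast enough for real-time use, and the line-status input lets the same network allocate losses during outages.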
|
557 |
Task Re-allocation Methodologies for Teams of Autonomous Agents in Dynamic Environments
Sheridan, Patricia Kristine 25 August 2011 (has links)
Two on-line task re-allocation methodologies capable of re-allocating agents to tasks on-line for minimum task completion time in dynamic environments are presented herein. The first methodology, the Dynamic Nearest Neighbour (DNN) Policy, is proposed for the operation of a fleet of vehicles in a city-like application of the dial-a-ride problem. The second methodology, the Dynamic Re-Pairing Methodology (DRPM), is proposed for the interception of a group of mobile targets by a dynamic team of robotic pursuers, where the targets are assumed to be highly maneuverable with a priori unknown, but real-time trackable, motion trajectories.
Extensive simulations and experiments have verified the DNN policy to be tangibly superior to the first-come-first-served and nearest neighbour policies in minimizing customer mean system time, and the DRPM to be tangibly efficient in the optimal dynamic re-pairing of multiple mobile pursuers to multiple mobile targets for minimum total interception time.
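The nearest-neighbour idea underlying the DNN policy (though not its full dynamic logic) can be illustrated with a static example; the depot and request coordinates below are invented:

```python
def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def route_length(start, order):
    # Total travel to serve requests in the given order.
    total, pos = 0.0, start
    for p in order:
        total += dist(pos, p)
        pos = p
    return total

def nearest_neighbour_order(start, points):
    # Always serve the closest outstanding request next.
    remaining, pos, order = list(points), start, []
    while remaining:
        nxt = min(remaining, key=lambda p: dist(pos, p))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

# Hypothetical pickup locations, served from a depot at the origin.
requests = [(5.0, 1.0), (1.0, 1.0), (6.0, 0.0), (2.0, 3.0)]
fcfs_len = route_length((0.0, 0.0), requests)   # first-come-first-served order
nn_len = route_length((0.0, 0.0),
                      nearest_neighbour_order((0.0, 0.0), requests))
```

Even on this tiny instance the nearest-neighbour route is far shorter than the arrival-order route, which is the effect (reduced customer mean system time) that the simulations and experiments verified for the dynamic policy.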
|
559 |
Rectilinear Interdiction Problem By Locating A Line Barrier
Gharehmeshk Gharravi, Hossein 01 January 2013 (has links) (PDF)
This study takes an optimization approach to the rectilinear interdiction problem of locating a line barrier.
Interdiction problems study the effect of a limited disruption action on the operations of a system. Network interdiction problems, where nodes and arcs of the network are susceptible to disruption actions, are extensively studied in the operations research literature. In this study, we consider a set of sink points on the plane that are served by source points, and our aim is to study the effect of locating a line barrier on the plane (as a disruption action) such that the total shortest distance between sink and source points is maximized. We compute the shortest distances after disruption using the visibility concept and properties of our problem. The amount of disruption is limited by imposing constraints on the length of the barrier and on the total number of disrupted points. The suggested solution approaches are based on mixed-integer programming and a polynomial-time algorithm.
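A hedged sketch of the core distance computation (not the thesis's MIP model): for a single horizontal line barrier, the post-disruption rectilinear distance follows from the visibility idea that a blocked path must route around the nearer barrier endpoint:

```python
def rectilinear_dist(s, t, barrier):
    # barrier = (xl, xr, yb): an open horizontal segment at height yb that
    # paths may not cross; they may pass around its endpoints.
    xl, xr, yb = barrier
    base = abs(s[0] - t[0]) + abs(s[1] - t[1])
    if not (min(s[1], t[1]) < yb < max(s[1], t[1])):
        return base                  # path need not cross the barrier's line
    lo, hi = sorted((s[0], t[0]))
    if lo <= xl or hi >= xr:
        return base                  # an endpoint lies on the way anyway
    # Both points sit strictly under the barrier span: detour horizontally
    # to the nearer endpoint; vertical travel is unchanged.
    around_left = (lo - xl) + (hi - xl)
    around_right = (xr - lo) + (xr - hi)
    return abs(s[1] - t[1]) + min(around_left, around_right)
```

An interdictor maximizing total source-sink distance would then choose the barrier position (subject to the length and disrupted-point limits) that maximizes the sum of such detours.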
|