81

Capacity Scaling and Optimal Operation of Wireless Networks

Ghaderi Dehkordi, Javad 15 July 2008 (has links)
How much information can be transferred over a wireless network, and what is the optimal strategy for operating such a network? This thesis addresses these questions from an information-theoretic perspective. A model of a wireless network is formulated to capture the main features of the wireless medium as well as the topology of the network. The performance metrics are throughput and transport capacity. The throughput is the sum of the reliable communication rates over all source-destination pairs in the network. The transport capacity is a sum of rates in which each rate is weighted by the distance over which it is transported. Based on the network model, we study the scaling laws of these performance measures as the number of users in the network grows. First, we analyze the performance of multihop wireless networks under different criteria for successful reception of packets at the receiver. Then, we consider the problem of information transfer without arbitrary assumptions on the operation of the network. We observe a dichotomy between the cases of relatively high and low signal attenuation. Moreover, a fundamental relationship between the performance metrics and the total transmitted power of the users is established. As a result, the optimality of multihop is demonstrated for some scenarios in the high-attenuation regime, and strategies that outperform multihop are proposed for operation in the low-attenuation regime. We then study the performance of a special class of networks, random networks, in which traffic is uniformly distributed across the network. For this class, upper bounds on the throughput are presented for both the low- and high-attenuation cases. To achieve these upper bounds, a hierarchical cooperation scheme is analyzed and optimized by choosing the number of hierarchical stages and the corresponding cluster sizes that maximize the total throughput. In addition, to apply the hierarchical cooperation scheme to random networks, a clustering algorithm is developed that divides the whole network into quadrilateral clusters, each containing exactly the required number of nodes.
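The two metrics defined in this abstract can be stated compactly: for source-destination pairs i with reliable rate R_i carried over distance d_i, the throughput is T = Σ R_i and the transport capacity is C_T = Σ R_i·d_i. A minimal sketch of that computation, with hypothetical node positions and rates (illustration only, not data from the thesis):

```python
import math

# Hypothetical node positions (x, y) in meters and reliable rates in bit/s
# for a few source-destination pairs; values are for illustration only.
pairs = [
    {"src": (0.0, 0.0),  "dst": (100.0, 0.0), "rate": 2.0e6},
    {"src": (50.0, 20.0), "dst": (60.0, 80.0), "rate": 5.0e6},
    {"src": (10.0, 90.0), "dst": (90.0, 90.0), "rate": 1.0e6},
]

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Throughput: sum of reliable rates over all source-destination pairs.
throughput = sum(p["rate"] for p in pairs)

# Transport capacity: each rate weighted by the distance over which it is carried.
transport_capacity = sum(p["rate"] * distance(p["src"], p["dst"]) for p in pairs)

print(f"throughput         = {throughput:.3e} bit/s")
print(f"transport capacity = {transport_capacity:.3e} bit-m/s")
```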
82

Skalning och brusberäkning av tvåportsadaptorer / Scaling and noise calculation of two-port adaptors

Samuelsson, Daniel January 2004 (has links)
The goal of this work is to summarize the calculations for scaling and noise of two-port adaptors. Two different methods have been described and used for the final results.
83

Impact of Technology Scaling on Leakage Reduction Techniques

Ghafari, Payam January 2007 (has links)
CMOS technology is scaling down to meet the performance, production cost, and power requirements of the microelectronics industry. The increase in transistor leakage current is one of the most important negative side effects of technology scaling. Leakage affects not only the standby and active power consumption, but also circuit reliability, since it is strongly correlated with process variations. Leakage current influences circuit performance differently depending on the operating conditions (e.g., standby, active, burn-in test), the circuit family (e.g., logic or memory), and the environmental conditions (e.g., temperature, supply voltage). Until the introduction of high-K gate dielectrics in the lower nanometer technology nodes, gate leakage will remain the dominant leakage component after subthreshold leakage. Since the way designers control subthreshold and gate leakage can change from one technology to another, it is crucial for them to be aware of the impact of total leakage on the operation of circuits and of the techniques that mitigate it. Consequently, techniques that reduce total leakage in circuits operating in the active mode at different temperature conditions are examined. Also, the implications of technology scaling on the choice of techniques to mitigate total leakage are investigated. This work resulted in guidelines for the design of low-leakage circuits in nanometer technologies. Logic gates in the 65nm, 45nm, and 32nm nodes are simulated and analyzed. The techniques adopted for comparison in this work affect both gate and subthreshold leakage, namely stack forcing, pin reordering, reverse body biasing, and high-threshold-voltage transistors. Aside from leakage, our analysis also highlights the impact of these techniques on the circuit's performance and noise margins. The reverse body biasing scheme tends to be less effective as the technology scales, since it increases the band-to-band tunneling current. Employing high-threshold-voltage transistors is one of the most effective techniques for reducing leakage with minor performance degradation. Pin reordering and natural stacks are techniques that do not affect the performance of the device, yet they reduce leakage. However, it is demonstrated that they are not equally effective in all types of logic, since the input values might switch only between the highly leaky states. Therefore, depending on the design requirements of the circuit, a combination, or hybrid, of techniques that can result in better performance and leakage savings is chosen. Power-sensitive technology mapping tools can use the guidelines resulting from this research in the low-power design flow to meet the required maximum leakage current in a circuit. These guidelines are presented in general terms so that they can be adopted for any application and process technology.
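As a rough illustration of why raising the threshold voltage is effective, subthreshold leakage falls exponentially with Vth, roughly I_sub ∝ exp(-Vth/(n·vT)). The sketch below uses this textbook model with assumed, illustrative device parameters (not values from the thesis), and covers only the subthreshold component, not gate or band-to-band tunneling leakage:

```python
import math

def subthreshold_leakage(vth, i0=100e-9, n=1.5, temp_k=300.0):
    """Relative subthreshold leakage, I_sub ~ I0 * exp(-Vth / (n * vT)).

    i0, n, and temp_k are assumed illustrative parameters, not data from the
    thesis; only the exponential dependence on Vth matters for this sketch.
    """
    k_over_q = 8.617e-5          # Boltzmann constant over charge, V/K
    v_t = k_over_q * temp_k      # thermal voltage, ~26 mV at 300 K
    return i0 * math.exp(-vth / (n * v_t))

low_vt  = subthreshold_leakage(vth=0.30)   # nominal (low-Vth) device
high_vt = subthreshold_leakage(vth=0.40)   # high-Vth replacement

# A 100 mV increase in Vth cuts subthreshold leakage by roughly
# exp(0.1 / (1.5 * 0.026)) ~ 13x in this simplified model.
print(f"leakage ratio low-Vth / high-Vth ~ {low_vt / high_vt:.1f}x")
```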
84

Variability-Aware Design of Static Random Access Memory Bit-Cell

Gupta, Vasudha January 2008 (has links)
The increasing integration of functional blocks in today's integrated circuit designs necessitates a large embedded memory for data manipulation and storage. The most often used embedded memory is the Static Random Access Memory (SRAM), with a six-transistor memory bit-cell. Currently, memories occupy more than 50% of the chip area, and this percentage is only expected to increase in the future. Therefore, for silicon vendors, it is critical that the memory units yield well, to enable an overall high yield of the chip. The increasing memory density is accompanied by aggressive scaling of the transistor dimensions in the SRAM. Together, these two developments make SRAMs increasingly susceptible to process-parameter variations. As a result, in the current nanometer regime, statistical methods for the design of the SRAM array are pivotal to achieve satisfactory levels of silicon predictability. In this work, a method for the statistical design of the SRAM bit-cell is proposed. Not only does it provide a high yield, but it also meets the specifications for the design constraints of stability, successful write, performance, leakage, and area. The method consists of an optimization framework that derives the optimal design parameters, i.e., the widths and lengths of the bit-cell transistors, which provide maximum immunity to variations in the transistors' geometry and intrinsic threshold voltage fluctuations. The method is employed to obtain optimal designs in the 65nm, 45nm, and 32nm technologies for different sets of specifications, and the optimality of the resultant designs is verified. The resultant optimal bit-cell designs in the 65nm, 45nm, and 32nm technologies are analyzed to study the SRAM area and yield trade-offs associated with technology scaling, and two ways are proposed to achieve 50% scaling of the bit-cell area at every technology node. The resultant designs are further investigated to understand which mode of failure in the bit-cell becomes more dominant with technology scaling. In addition, the impact of voltage scaling on the bit-cell designs is also studied.
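The optimization framework described above can be pictured as a constrained sizing problem: choose transistor widths (and lengths) to minimize failure probability under variation, subject to stability, write, performance, leakage, and area constraints. A conceptual sketch follows, with a hypothetical failure-probability surrogate and placeholder constraint values that stand in for, and are not, the thesis's actual formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Design vector x = [w_pull_down, w_access, w_pull_up] in nm (lengths held fixed
# for brevity). The objective and constraints are illustrative stand-ins for the
# yield model and design constraints used in the thesis.

def failure_prob(x):
    w_pd, w_ax, w_pu = x
    # Placeholder surrogate: read-stability failures fall as w_pd/w_ax grows,
    # write failures fall as w_ax/w_pu grows.
    read_fail = np.exp(-3.0 * w_pd / w_ax)
    write_fail = np.exp(-2.0 * w_ax / w_pu)
    return read_fail + write_fail

constraints = [
    # Area budget: total width below an assumed limit (placeholder number).
    {"type": "ineq", "fun": lambda x: 400.0 - np.sum(x)},
    # Performance: access device not too weak relative to the pull-down.
    {"type": "ineq", "fun": lambda x: x[1] - 0.5 * x[0]},
]
bounds = [(60.0, 200.0)] * 3          # assumed min/max widths per device, nm

result = minimize(failure_prob, x0=[120.0, 90.0, 70.0],
                  bounds=bounds, constraints=constraints, method="SLSQP")
print("optimal widths (nm):", np.round(result.x, 1))
print("surrogate failure probability:", result.fun)
```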
86

A study of calcium carbonate crystal growth in the presence of a calcium complexing agent

Trainer, Denise R. (Denise Ruth) 01 June 1981 (has links)
No description available.
87

Effects of membrane fouling on the operation of low pressure reverse osmosis system for water treatment

Tsai, Wen-Chin 27 August 2012 (has links)
The tap water treated by water treatment plants in southern Taiwan comes from river surface water, subsurface flow, and deep-well groundwater. The raw water carries high levels of hardness and ammonia-nitrogen because of the terrain, geology, and human activity within the source area. Because the quality of water drawn from the Kao-ping River is difficult to control across the wet and dry seasons, an efficient treatment process is needed to obtain high-quality drinking water. Groundwater from limestone strata adds high hardness and total dissolved solids, further increasing the treatment difficulty for southern Taiwan plants. It is therefore recommended that influent hardness and silicate (SiO2) be limited to less than 300 mg/L and 15 mg/L, respectively. The measured Ca, Mg, Si, and Al concentrations in the influent were 74.3 mg/L, 18.7 mg/L, 12.9 mg/L, and 0.1 mg/L, respectively; this high inorganic content increases the treatment loading. This study investigates the membrane clogging and fouling problems at advanced water treatment plants that use a low-pressure reverse osmosis (LPRO) membrane system to remove impurities from the influent. On-site operating experience was accumulated through water-quality statistics and the autopsy of single LPRO membrane modules taken from the process. In addition, three single-tube RO membranes were set up for on-site experiments, and data were collected before and after dosing an antifouling additive in order to identify the fouling and clogging caused by the influent raw water. The quality of the raw water entering the LPRO membranes is closely tied to the efficiency of the treatment plant. The results show that a decrease in raw-water temperature reduces the LPRO permeate volume, because the lower temperature increases the viscosity of the water. The primary clogging substance on the membranes was aluminum, which may be increased by the use of an aluminum-based coagulant. The effluent TOC ranged from 0.2 to 1.4 mg/L, indicating stable effluent quality, while the UV254 removal efficiency exceeded 75%. Organic analysis of the LPRO effluent indicates that the pretreatment process can leave ring-structured organics in the water. Excitation-emission fluorescence matrix (EEFM) analysis of the LPRO samples shows strong low-emission-wavelength fluorescence in the influent at Ex/Em 230/340 nm, and the RO membrane system reduces the fluorescence at Ex/Em 280/330 and 240/340 nm. Notably, the fluorescence peak at the higher emission wavelength of Ex/Em 240/400 nm disappears after the RO stage, indicating that the RO membrane separates organic matter from the water effectively. Elemental analysis of the RO membrane surface by SEM and EDX found large amounts of aluminum and silicate on the sectioned membrane modules. Overall, the results show that the coagulation and sedimentation pretreatment cannot remove metal substances and organics efficiently, which directly causes the membrane fouling and clogging problems.
88

Implementation and Design of a Cycle-Efficient 64b/32b Integer Divider Using a Table-Sharing Method

Wang, Jun-Jie 15 June 2001 (has links)
The first topic of this thesis is a mixed radix-16/8/4/2 64b/32b integer divider that uses a variety of techniques, including operand scaling, table partitioning, and table sharing, to increase performance without a corresponding increase in complexity. The second topic is a noise-immune address transition detector (ATD) circuit. We employ a simple feedback loop to stabilize the generated CS (chip select) signal and two delay cells to dynamically adjust the width of the CS strobe.
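To see why a higher radix makes a divider cycle-efficient, note that a radix-2^k digit-recurrence divider retires k quotient bits per iteration, so a 64-bit dividend needs 64/k cycles. The sketch below is a simplified restoring scheme for illustration only; it is not the quotient-digit selection, operand scaling, or table-sharing hardware described in the thesis:

```python
def digit_recurrence_divide(dividend, divisor, n_bits=64, radix_bits=4):
    """Unsigned digit-recurrence division, retiring radix_bits quotient bits
    per iteration (radix 16 when radix_bits == 4)."""
    assert divisor != 0
    quotient, remainder = 0, 0
    steps = n_bits // radix_bits          # e.g. 64-bit dividend, radix 16 -> 16 cycles
    for i in range(steps):
        # Shift in the next radix_bits bits of the dividend (MSB first).
        shift = n_bits - radix_bits * (i + 1)
        chunk = (dividend >> shift) & ((1 << radix_bits) - 1)
        remainder = (remainder << radix_bits) | chunk
        # Select the quotient digit (0 .. radix-1); hardware would use a lookup
        # table on a truncated remainder/divisor instead of a full divide.
        q_digit = remainder // divisor
        remainder -= q_digit * divisor
        quotient = (quotient << radix_bits) | q_digit
    return quotient, remainder

q, r = digit_recurrence_divide(0x1234_5678_9ABC_DEF0, 0xDEAD_BEEF)
assert q == 0x1234_5678_9ABC_DEF0 // 0xDEAD_BEEF
assert r == 0x1234_5678_9ABC_DEF0 % 0xDEAD_BEEF
```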
89

A Study of Market Segmentation and Positioning on Industrial Furnace System Integration from A Global Perspective

Lee, Jui-Kuo 28 June 2002 (has links)
Industrial furnaces are critical production equipment for manufacturing enterprises. Their performance directly affects the efficiency and effectiveness of manufacturing firms and can even change an organization's competitive advantage. To achieve higher quality and greater synergy, an integrated industrial furnace system is needed, one that may comprise industrial furnaces, networking, automatic computer control, intelligent software, and decision-support technology. Such integrated systems are gradually replacing the traditional single, stand-alone furnace, and their importance to the enterprise has increased dramatically. As the market focus shifts from production orientation to customer orientation, more and more companies recognize the importance of market segmentation and positioning strategy. Small market segments and resistance to standardization (i.e., high customization) are major characteristics of the industrial furnace industry. Segmentation and positioning strategy is therefore a critical success factor for the furnace industry, in addition to technical excellence. Taiwanese industrial furnace companies have recently become global businesses, so this research explores the segmentation and positioning strategy of the furnace industry from a global perspective. The research combines qualitative and quantitative methods. First, literature review, in-depth interviews, and focus-group interviews with suppliers, customers, scholars, market analysts, and economic officials are used to develop two measurement instruments: a supplier strategy measure and a customer preference and selection-criteria measure. Second, both measures are administered via questionnaires, and their reliability and validity are tested. Finally, the validated questionnaires are formally mailed to suppliers and customers in the industrial furnace industry. Based on multivariate analysis, including factor analysis, cluster analysis, and multidimensional scaling, the study provides insights and suggestions about the integrated industrial furnace system industry: (1) the strategy patterns of industrial furnace suppliers; (2) the selection criteria for industrial furnaces from a broad customer perspective; (3) the grouping of customers and suppliers in the integrated industrial furnace system industry; (4) the perceptual map of major suppliers in the industry; and (5) an analysis of current strategy, market segmentation, and positioning, with suggestions for the future.
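The cluster-analysis and multidimensional-scaling steps mentioned above can be sketched in a few lines: cluster suppliers by their rating profiles, then project them into two dimensions to draw a perceptual map. The ratings matrix and criterion names below are placeholders, not survey results from the thesis:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.manifold import MDS

# Hypothetical survey data: rows = suppliers, columns = customer ratings on
# selection criteria (e.g. price, lead time, customization, service, controls).
ratings = np.array([
    [4.2, 3.1, 4.8, 3.9, 4.5],
    [3.0, 4.4, 2.9, 3.5, 3.2],
    [4.6, 3.8, 4.1, 4.4, 4.7],
    [2.8, 4.0, 3.0, 2.9, 3.1],
    [3.9, 3.3, 4.5, 4.0, 4.2],
])

X = StandardScaler().fit_transform(ratings)

# Cluster analysis: group suppliers with similar strategy/rating profiles.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Multidimensional scaling: project suppliers into 2-D for a perceptual map.
coords = MDS(n_components=2, random_state=0).fit_transform(X)

for i, (xy, c) in enumerate(zip(coords, labels)):
    print(f"supplier {i}: map position = ({xy[0]:+.2f}, {xy[1]:+.2f}), cluster = {c}")
```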
90

Displaying and Scaling Color Images on Hexagonal Grids

Liu, Che-Wei 04 July 2002 (has links)
Image scaling is a very common capability on rectangular grids, so its development on hexagonal grids is likewise fundamental and necessary. In this paper, we develop new techniques to scale digital images both with and without resampling. So far, we have not seen any research on color display on hexagonal grids, which has limited the application of hexagonal-grid display devices. In this paper, we also develop a color display system on a hexagonal grid using a symmetrical triangular frame.
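One common way to move image data from a rectangular grid onto a hexagonal lattice is to place hexagonal sample centres on offset rows (odd rows shifted half a cell, row pitch of sqrt(3)/2) and interpolate each sample from the source pixels. The sketch below shows that generic approach under those assumptions; it is not necessarily the resampling or triangular-frame technique used in the thesis:

```python
import numpy as np

def resample_to_hex(img, rows, cols):
    """Resample a rectangular-grid image onto a hexagonal lattice.

    Hex sample centres use offset rows (odd rows shifted by half a cell) with a
    row pitch of sqrt(3)/2, and each sample is bilinearly interpolated from the
    source grid. Generic sketch, not the thesis's algorithm.
    """
    h, w = img.shape[:2]
    out = np.zeros((rows, cols) + img.shape[2:], dtype=np.float64)
    row_pitch = np.sqrt(3.0) / 2.0
    scale_x = (w - 1) / (cols - 0.5)                      # room for half-cell shift
    scale_y = (h - 1) / ((rows - 1) * row_pitch) if rows > 1 else 0.0
    for r in range(rows):
        for c in range(cols):
            x = (c + 0.5 * (r % 2)) * scale_x             # odd rows offset by half a cell
            y = r * row_pitch * scale_y
            x0, y0 = int(np.floor(x)), int(np.floor(y))
            x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
            fx, fy = x - x0, y - y0
            # Bilinear interpolation of the four surrounding source pixels.
            out[r, c] = ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
                         + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])
    return out

# Tiny usage example on a random RGB image.
hex_samples = resample_to_hex(np.random.rand(64, 64, 3), rows=32, cols=32)
print(hex_samples.shape)   # (32, 32, 3)
```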
