601

Mercury accumulation in yellowfin tuna, Thunnus albacares, in Seychelles, Indian Ocean

Li, Hsin-hsien 06 September 2010 (has links)
Ninety-three yellowfin tuna, Thunnus albacares, with fork lengths ranging from 80 to 168 cm, were collected from the waters around the Seychelles by two longline fishing vessels from April to December 2006. Muscle and liver samples were analyzed for total mercury (THg) and organic mercury (OHg) concentrations. The THg and OHg concentrations in muscle were similar to those reported in previous studies. For fish with fork lengths larger than 113 cm (the big-fish group), the THg and OHg concentrations in both muscle and liver showed a positive linear relationship with fork length, whereas for fish of 80-112 cm (the small-fish group), only the muscle THg concentration showed a negative linear relationship. Such patterns were found in yellowfin tuna for the first time and may be related to growth rate. Only one liver THg concentration exceeded the limit set by the European Commission Decision (1 mg/kg THg wet wt.); all other samples complied with both that limit and the US FDA food safety standard (1 mg/kg MeHg wet wt.). According to the dietary recommendations of the Department of Health, Executive Yuan, yellowfin tuna can supply 86% of a person's weekly animal protein intake.
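The positive linear relationship reported for the big-fish group can be illustrated with an ordinary least-squares fit. This is a minimal sketch only: the fork lengths and mercury concentrations below are invented for illustration, not measurements from the thesis.

```python
# Sketch: OLS fit of muscle THg against fork length for a hypothetical
# "big fish" (>= 113 cm) sample. All data values are invented.

def ols_fit(x, y):
    """Return (slope, intercept) of the least-squares line y = a*x + b."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x

# Invented fork lengths (cm) and muscle THg (mg/kg wet wt.).
fork_length = [115, 125, 135, 145, 155, 165]
muscle_thg = [0.25, 0.31, 0.36, 0.44, 0.50, 0.58]

slope, intercept = ols_fit(fork_length, muscle_thg)
print(f"slope = {slope:.5f} mg/kg per cm")  # positive: THg rises with length
```

A positive fitted slope corresponds to the reported pattern of mercury accumulating with fish size.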
602

Study on Forging and Thread-rolling Processes of Magnesium Alloy Screws

Huang, Kai-neng 29 August 2011 (has links)
This study investigated the effects of process parameters on the forging load and metal flow pattern during the forging and thread-rolling of small LZ91 magnesium alloy screws, using finite element analysis. First, compression tests were carried out at various forming temperatures to characterize the flow stress. The FEM software DEFORM-2D was then used to simulate the forging and thread-rolling of small screws and to analyze formability and process parameters. In the forging part of the study, a two-stage process was modeled, and it was found that upper-die velocity, temperature, and friction factor affect product quality and appearance. In the thread-rolling part, the effects of friction factor and temperature on effective stress, effective strain, metal flow, and thread height were examined. In addition, forging and thread-rolling experiments were conducted on a universal testing machine with a self-designed mold and MoS2 lubricant, and the experimental results were compared with the simulations to verify the suitability and accuracy of the FEM for the forging process. The analysis results can serve as a reference for engineers.
603

Implementations of Dynamic End-to-End Bit-rate Adjustments for Intelligent Video Surveillance Networks

Tsai, YueLin 17 January 2012 (has links)
In this thesis, we propose a mechanism that dynamically adjusts video parameters in an intelligent video surveillance network. Whenever an alarm is raised or the network becomes congested, the mechanism adjusts video parameters, including frames per second (FPS), quality, and picture size, to adapt to the available bandwidth. For example, FPS can be adjusted when an alarm exists in the surveillance system, while quality or picture size can be adjusted, based on the total number of video packets received per second, to keep the video smooth when the network is congested. To demonstrate the proposed schemes, we implement these three adjustable parameters on a Linux platform: we establish a new HTTP connection from a client to a camera and develop the corresponding control messages issued by the client to change the video parameters. In addition, we implement a video recovery mechanism that measures the difference in arrival time between consecutive packets (referred to as diff); diff is used to decide whether the current, higher-quality picture should be kept or downgraded to a lower-quality picture to avoid packet loss under network congestion. Finally, we observe whether the proposed scheme keeps video quality smooth under different levels of background traffic.
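The diff-based recovery idea can be sketched as a simple threshold rule on inter-packet arrival gaps. This is an illustrative sketch, not the thesis implementation; the timestamps and the 50 ms threshold are assumed values.

```python
# Sketch: compute "diff" (inter-arrival time between consecutive packets)
# and decide whether to keep or downgrade picture quality. The threshold
# and timestamps are assumptions for illustration.

def decide_quality(arrival_times_ms, threshold_ms=50.0):
    """Return 'downgrade' if the mean inter-packet gap exceeds the
    threshold (suggesting congestion), else 'keep'."""
    diffs = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_diff = sum(diffs) / len(diffs)
    return "downgrade" if mean_diff > threshold_ms else "keep"

# Smooth arrivals, roughly 30 ms apart: keep the current quality.
print(decide_quality([0, 30, 61, 90, 122]))    # keep
# Congested arrivals, gaps well past 50 ms: downgrade.
print(decide_quality([0, 70, 160, 240, 330]))  # downgrade
```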
604

How Simple Product Design Affects Consumer Responses

CHANG, CHIA-CHIEH 31 January 2012 (has links)
Product design affects many aspects of people's lives. This research uses qualitative and quantitative methods to examine how simple design elicits different consumer responses. First, we performed a content analysis of household and digital products and derived a definition and the characteristics of simple design. Second, we used an experimental design to identify patterns in consumers' psychological and behavioral responses to product design. For psychological responses, we observed consumer expectation and satisfaction regarding product appearance, assortment size, and functional information; we also examined how different decision-making tendencies (maximizers vs. satisficers) affect consumer approach behavior. We conclude that the required elements of simple design are (1) single color, (2) unique personality, (3) simple shapes, (4) practical function, (5) ease of use, (6) match, (7) materials, (8) aesthetics, and (9) culture and emotion. For external product appearance, expectations of simple design are high, and satisfaction still has considerable room for improvement; the gaps between expectation and satisfaction are largest for attention drawing, unique symbolism, and ergonomics. Regarding assortment size, simple design raises expectations when the assortment is large; however, satisfaction did not drop as previous studies suggested but remained unchanged, which may inform future product development. Functional information plays an important role for digital products, meaning that a simply designed appearance alone can achieve only limited benefits. For behavioral responses, satisfaction and approach behavior are positively related, and responses to a store are noticeably stronger than responses to a single product.
Across decision-making tendencies, product personality, attention drawing, and assortment size are significant, but the results for functional information are inconclusive.
605

Spray Droplet Diameter and Flow-Field Characteristic Analysis

Jheng, Qiao-Hong 06 August 2012 (has links)
The aim of this study was to observe the properties of a spray field, using micro particle image velocimetry (µPIV) and holographic interferometric particle imaging (IPI) for imaging and analysis of the global spray field. The experiments used different nozzle diameters (dj = 200 µm and dj = 500 µm) and different gauge pressures (ΔP = 300 kPa, 500 kPa, and 700 kPa) as the main parameters, with distilled (DI) water as the working medium. The study was divided into two parts. The first part used the µPIV system to obtain a two-dimensional global visualization of the spray-field distribution and spray angle of each nozzle under the different gauge pressures (ΔP); the flow velocity distribution and its variations (axial velocity and impact velocity) across the global spray frame were also measured. Because the nozzle diameter determines the distribution of spray droplets, the second part used the IPI system to measure and explore the atomized droplet sizes from each nozzle under the different gauge pressures (ΔP), from which drop-size histograms were created through statistical analysis.
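The histogram step described above amounts to binning measured droplet diameters into fixed-width size classes. As a minimal sketch, with invented diameters rather than IPI measurements:

```python
# Sketch: bin droplet diameters (in micrometers) into fixed-width classes
# to build a drop-size histogram. Diameter values are invented.

def histogram(diameters_um, bin_width=10.0):
    """Count droplets per bin; keys are bin lower edges in micrometers."""
    counts = {}
    for d in diameters_um:
        edge = int(d // bin_width) * bin_width
        counts[edge] = counts.get(edge, 0) + 1
    return dict(sorted(counts.items()))

drops = [12.0, 18.5, 23.1, 27.9, 31.4, 33.0, 35.7, 42.2, 48.8, 55.0]
print(histogram(drops))  # counts per 10-micrometer bin
```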
606

The effect of grain size on the formation of deformation twins in AZ31 alloy

Tsai, Meng-Shu 11 September 2012 (has links)
Compression tests along the rolling and normal directions of AZ31B plate material were performed at room temperature at a strain rate of 10 s⁻¹ to understand the effect of grain size on the formation of deformation twins. When the plates were compressed along the rolling direction, tension twins formed in bands. Within the twin bands, nearly all grains contained tension twins, irrespective of grain size; outside the bands, no twins were found. Under this deformation condition, grain size has no effect on the formation of tension twins, because the formation of a tension twin can trigger the formation of a tension twin in the neighboring grain regardless of that grain's size. When the plates were compressed along the normal direction, no twin bands formed, and compression twins were distributed evenly through the specimens. Under this condition, the larger the grain size, the higher the fraction of grains containing compression twins, indicating that compression twins form more readily in large grains.
607

Cross-layer Cooperative Transmission scheme in Mobile Wireless Networks

Yang, Kai-Ting 23 November 2012 (has links)
Driven by the ambition of ubiquitous networking, wireless networks have made substantial technical advances in recent years. By using radio signals as data links, wireless networks dispense with the tangle of wired cables. However, due to the inherent limitations of wireless channels and legacy protocol design, users of wireless networks today still suffer from low bandwidth and high error rates. The seven-layer Open Systems Interconnection (OSI) model was originally designed with wired network environments in mind: each layer handles specific tasks without communicating with the others. Thanks to the relative stability of wired channels, this strictly layered approach works well in wired environments, but its adequacy is questionable in wireless environments, whose characteristics differ completely from those of their wired counterparts. In wireless environments, channel conditions are highly time-varying and affected by many factors: external interference or signal degradation may lead to severe packet loss, and even when signal-to-noise ratios are fine, transmissions may still fail due to collisions when contention-based MAC protocols are adopted. Conventional protocols developed for wired networks cannot respond appropriately to the characteristics of wireless channels and may react incorrectly. For these reasons, a flexible framework that captures the rapidly changing conditions of wireless channels and responds to them immediately is necessary. In this dissertation, we design a cross-layer framework that takes the characteristics of wireless networks into account. Through coordination among the layers involved, the framework adapts to wireless channel conditions and significantly improves QoS in wireless networks.
To reduce collision probabilities in wireless networks, we propose a novel protocol named Wait-and-Transmit, which effectively alleviates contention; by reducing collisions, transmission delays are shortened and throughput is significantly improved. For transmission paths containing at least one wireless link, a flexible and efficient cross-layer transmission scheme is also presented, which separates rapidly changing conditions, such as collision probabilities, from relatively stable ones and responds well to those changes. The proposed approaches significantly improve the performance of wireless networks, and we believe they can contribute to the development of wireless networking.
608

Electrical conductivity of segregated network polymer nanocomposites

Kim, Yeon Seok 02 June 2009 (has links)
A set of experiments was designed and performed to gain a fundamental understanding of various aspects of the segregated-network concept. The electrical and mechanical properties of composites made from a commercial latex and carbon black were compared with those of a composite made from a polymer solution. The percolation threshold of the emulsion-based composite is nearly one order of magnitude lower than that of the solution-based composite. The segregated-network composite also shows significant improvement in both electrical and mechanical properties at low carbon black loading, while the solution-based composite achieves its maximum enhancement at higher loading (~25 wt%). The effect of the particle-size ratio between the polymer particles and the filler was also studied. To create a composite with an extremely large particle-size ratio (> 80,000), layer-by-layer assembly was used to coat large polyethylene particles with carbon black; hyper-branched polyethylenimine was covalently grafted to the polyethylene surface to promote film growth. The resulting composite has a percolation threshold below 0.1 wt%, the lowest ever reported for a carbon-filled composite, and theoretical predictions suggest the actual threshold may be below 0.002 wt%. Finally, the effect of the emulsion polymer's modulus on the segregated network was studied. Monodisperse emulsions with different glass transition temperatures were used as matrices. Composites made from the higher-modulus emulsions show lower percolation thresholds and higher conductivity, because a higher modulus produces tighter packing of carbon black between the polymer particles. When the drying temperature was increased to 80°C, the percolation thresholds of some systems converged because their moduli became very close.
This work suggests that modulus, along with polymer particle size, is a variable that can be used to tailor percolation threshold and electrical conductivity.
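Percolation behavior of the kind described here is commonly modeled with the power law σ = σ₀(p − pc)ᵗ for filler loading p above the threshold pc. As a hedged sketch, with assumed constants (σ₀, pc = 0.1 wt%, t = 2) rather than values fitted in this work:

```python
# Sketch of the standard percolation power law for a conductive-filler
# composite. All constants below are assumptions for illustration.

def conductivity(p_wt, pc_wt=0.1, sigma0=1.0, t=2.0):
    """Composite conductivity (arbitrary units) from the percolation law;
    effectively insulating below the threshold."""
    if p_wt <= pc_wt:
        return 0.0
    return sigma0 * (p_wt - pc_wt) ** t

# Conductivity switches on at the threshold and grows with loading.
for p in (0.05, 0.1, 0.5, 1.0, 2.0):
    print(p, conductivity(p))
```

A lower pc, as achieved by the segregated-network morphology, shifts the onset of conduction to much smaller filler loadings.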
609

Predicting Size Effects and Determining Length Scales in Small-Scale Metallic Volumes

Faruk, Abu N. May 2010 (has links)
The purpose of this study is to develop an understanding of the behavior of metallic structures at small scales. Structural materials display strong size dependence when deformed non-uniformly into the inelastic range, a phenomenon widely known as the size effect. The primary focus of this study is on developing analytical models that predict some of the most commonly observed size effects in structural metals and validating them against experimental results. A nonlocal, rate-dependent, gradient-dependent theory of plasticity built on a thermodynamically consistent framework is adopted for this purpose. The developed gradient plasticity theory is applied to study the size effects observed in biaxial and thermal loading of thin films and in indentation tests. One important intrinsic material property associated with this study is the material length scale; the work also presents models for predicting length scales and discusses their physical interpretations. The proposed theory is found to interpret successfully the indentation size effects in micro/nano-hardness with pyramidal or spherical indenters, and it gives a sound interpretation of the size effects in thin films under biaxial or thermal loading.
610

Investigating the Effects of Sample Size, Model Misspecification, and Underreporting in Crash Data on Three Commonly Used Traffic Crash Severity Models

Ye, Fan May 2011 (has links)
Numerous studies have documented the application of crash severity models to explore the relationship between crash severity and its contributing factors. This body of work is large and usually focuses on particular model types; however, only a limited amount of research has compared the performance of different crash severity models. Additionally, three major issues in the modeling process for crash severity analysis have not been sufficiently explored: sample size, model misspecification, and underreporting in crash data. Therefore, this research studied three commonly used traffic crash severity models, the multinomial logit (MNL), ordered probit (OP), and mixed logit (ML) models, with respect to the effects of sample size, model misspecification, and underreporting, via a Monte Carlo approach using simulated and observed crash data. The sample-size results for the three models are consistent with prior expectations: small sample sizes significantly affect the development of crash severity models regardless of model type. Among the three, the ML model requires the largest sample size and the OP model the smallest, with the MNL model's requirement intermediate between the other two. When the sample size is sufficient, the model misspecification analysis suggests that, to decrease the bias and variability of estimated parameters, logit models should be selected over probit models, and that more general and flexible models, such as those allowing randomness in the parameters (i.e., the ML model), should be preferred. Another important finding is that none of the three models is immune to the underreporting issue.
To minimize bias and reduce the variability of the models, fatal crashes should be set as the baseline severity for the MNL and ML models, while for the OP model the crash severities should be ranked from fatal to property-damage-only (PDO) in descending order. Furthermore, when full or partial information about the unreported rates for each severity level is known, treating crash data as outcome-based samples in model estimation, via the Weighted Exogenous Sample Maximum Likelihood Estimator (WESMLE), dramatically improves estimation for all three models compared with the result produced by the standard maximum likelihood estimator (MLE).
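The weighting idea behind WESMLE for outcome-based samples can be sketched simply: each observation of severity class j receives weight Q_j / H_j, the population share divided by the sample share. This is an illustrative sketch with invented shares, not numbers from the dissertation.

```python
# Sketch: per-class weights that re-balance a severity-stratified
# (outcome-based) crash sample for weighted likelihood estimation.
# The population and sample shares are invented for illustration.

def wesml_weights(pop_share, sample_share):
    """Weight for each class j = population share / sample share."""
    return {j: pop_share[j] / sample_share[j] for j in pop_share}

# Suppose fatal crashes are heavily over-sampled relative to PDO crashes.
population = {"fatal": 0.01, "injury": 0.29, "PDO": 0.70}
sample = {"fatal": 0.20, "injury": 0.30, "PDO": 0.50}
print(wesml_weights(population, sample))
```

Over-sampled classes (here, fatal) get weights below 1 and under-sampled classes get weights above 1, so the weighted log-likelihood approximates what a random sample from the population would give.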
