361

Energetics of cholesterol-modulated membrane permeabilities. A simulation study

Wennberg, Christian January 2011 (has links)
Molecular dynamics simulations were used to study the permeation of four different solutes through different cholesterol-containing lipid bilayers. In all bilayers the limiting permeation barrier shifted towards the hydrophobic core as the cholesterol concentration was increased. Cholesterol's reducing effect on the permeation rate was observed, but under certain conditions results indicating an increased permeation rate with increasing cholesterol concentration were also obtained.
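The standard route from such simulations to a permeability is the inhomogeneous solubility-diffusion model, which combines the solute's free-energy profile G(z) and local diffusivity D(z) along the membrane normal. The sketch below illustrates the integral with made-up Gaussian-barrier profiles; the profile shapes, units, and numbers are assumptions for illustration, not results from this thesis.

```python
import numpy as np

# Inhomogeneous solubility-diffusion model: permeability from a free-energy
# profile G(z) and local diffusivity D(z) along the membrane normal,
#   1/P = integral of exp(G(z)/kT) / D(z) dz .
# Profiles below are illustrative placeholders, not results from the thesis.

kT = 2.479                                   # kJ/mol at ~298 K
z = np.linspace(-2e-7, 2e-7, 801)            # cm (bilayer ~4 nm thick)
G = 20.0 * np.exp(-(z / 1e-7) ** 2)          # kJ/mol, barrier near the hydrophobic core
D = 1e-5 * (0.6 + 0.4 * np.abs(z) / 2e-7)    # cm^2/s, slower diffusion at the center

resistance = np.trapz(np.exp(G / kT) / D, z)   # s/cm
P = 1.0 / resistance                           # cm/s
print(f"Estimated permeability: {P:.2e} cm/s")
```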
362

Estimation Techniques for Nonlinear Functions of the Steady-State Mean in Computer Simulation

Chang, Byeong-Yun 08 December 2004 (has links)
A simulation study consists of several steps such as data collection, coding and model verification, model validation, experimental design, output data analysis, and implementation. Our research concentrates on output data analysis. In this field, many researchers have studied how to construct confidence intervals for the mean μ of a stationary stochastic process. However, the estimation of the value of a nonlinear function f(μ) has not received a lot of attention in the simulation literature. Towards this goal, a batch-means-based methodology was proposed by Munoz and Glynn (1997). Their approach did not consider consistent estimators for the variance of the point estimator for f(μ). This thesis, however, considers consistent variance estimation techniques to construct confidence intervals for f(μ). Specifically, we propose methods based on the combination of the delta method with nonoverlapping batch means (NBM), standardized time series (STS), or a combination of both. Our approaches are tested on moving average, autoregressive, and M/M/1 queueing processes. The results show that the resulting confidence intervals (CIs) often perform better than the CIs based on the method of Munoz and Glynn in terms of coverage, the mean of their CI half-width, and the variance of their CI half-width.
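A minimal sketch of the kind of procedure described above — the delta method combined with nonoverlapping batch means — is given below for a toy AR(1) output process and f(μ) = μ². The batch count, process parameters, and target function are illustrative assumptions, not the thesis's experimental setup.

```python
import numpy as np
from scipy import stats

# Sketch (not the thesis's exact procedure): delta-method CI for f(mu)
# using nonoverlapping batch means from a single stationary output series.

def delta_nbm_ci(x, f, fprime, n_batches=20, alpha=0.05):
    b = len(x) // n_batches
    batch_means = x[: b * n_batches].reshape(n_batches, b).mean(axis=1)
    xbar = batch_means.mean()
    s2 = batch_means.var(ddof=1)                   # sample variance of batch means
    # Delta method: Var[f(xbar)] ~ f'(mu)^2 * Var[xbar]
    se = abs(fprime(xbar)) * np.sqrt(s2 / n_batches)
    t = stats.t.ppf(1 - alpha / 2, df=n_batches - 1)
    return f(xbar) - t * se, f(xbar) + t * se

# Toy usage: AR(1) output with mean 1, estimate f(mu) = mu^2
rng = np.random.default_rng(0)
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = 1.0 + 0.7 * (x[i - 1] - 1.0) + rng.normal()
lo, hi = delta_nbm_ci(x, lambda u: u**2, lambda u: 2 * u)
print(f"95% CI for mu^2: ({lo:.3f}, {hi:.3f})")   # should cover 1.0
```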
363

Automatic Recognition of Artificial Objects in Side-scan Sonar Imagery

Li, Ying-Zhang 02 August 2011 (has links)
Abstract: The interpretation and identification of information from side-scan sonar imagery depend mainly on visual observation and personal experience. Recent studies have tried to increase identification efficiency with numerical analysis methods, which can reduce errors caused by differences in observer experience and by extended observation time. However, the region around the center line of slant-range-corrected side-scan sonar imagery can degrade the ability of numerical methods to detect artificial objects. In principle, this problem can be solved by first using a specific characteristic function to identify the presence of concrete reefs and then filtering the noise in the central-line area with a threshold value. This study developed a fully automatic sonar imagery processing system for identifying cubic concrete and cross-type protective artificial reefs in Taiwan's offshore areas. The procedure is as follows: (1) Image acquisition: 500 kHz with a slant range of 75 m. (2) Feature extraction: grey-level co-occurrence matrix (entropy, homogeneity, and mean). (3) Classification: unsupervised Bayesian classifier. (4) Object identification: by characteristic feature (entropy). (5) Object status analysis: object circumference, area, center of mass, and quantity. Sonar images collected at the Chey-Ding artificial reef site in Kaohsiung City were used as a case study to verify the system and to find the optimum window size. The image characteristic functions include one first-order parameter (mean) and two second-order parameters (entropy and homogeneity). Eight sonar images containing 1-8 sets of cubic concrete and cross-type protective artificial reefs were used in this step. The identification efficiency of the system, in terms of producer's accuracy, is 79.41%. The results show that 16-28 sets of artificial reefs were detected in this case, comparable with the actual amount of 17 sets. Based on this investigation, the optimum window size was concluded to be 12×12 pixels with a sliding step of 4 pixels. Imagery collected at the Fang-Liau artificial reef site in Pingtung County was then tested. For applicability, the original imagery (2048×2800 pixels) was divided into 8 consecutive smaller frames (2048×350 pixels). The influence of a two-fold classification procedure and a central-line filtering method for reducing the noise caused by slant-range correction was discussed; the results show that the central-line filtering method is applicable. The object status analysis indicates that 156-236 sets of reefs exist. Automatic determination of the target using the entropy characteristic function is feasible: a value larger than 1.45 represents positive identification of concrete artificial reefs; a value smaller than 1.35 can be classified as a muddy-sand seabed; values between 1.35 and 1.45 indicate a transition zone where objects of smaller dimensions might exist. To achieve automatic operation, the system first identifies the presence of concrete reefs using the specific characteristic function, and based on that result the suture-line filtering method is then used to filter the noise from the image, so that all procedures run without human intervention. Keywords: side-scan sonar; characteristic function; gray-level co-occurrence matrix; Bayesian classification; entropy; homogeneity; mean
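The following sketch shows how the three GLCM features named in step (2) — entropy, homogeneity, and mean — can be computed for a single 12×12 window (the optimum window size reported above) using scikit-image's graycomatrix. The quantization level, distance, and angle are assumptions for illustration, and the random window merely stands in for real sonar data.

```python
import numpy as np
from skimage.feature import graycomatrix

# Sketch of GLCM texture features (entropy, homogeneity, mean) for one
# sliding window of a quantized side-scan sonar image.

def glcm_features(window, levels=32):
    glcm = graycomatrix(window, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]                            # normalized co-occurrence matrix
    i, j = np.indices(p.shape)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # in bits
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    mean = np.sum(i * p)                            # GLCM mean along one axis
    return entropy, homogeneity, mean

rng = np.random.default_rng(1)
window = rng.integers(0, 32, size=(12, 12), dtype=np.uint8)  # 12x12 window
print(glcm_features(window))
```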
364

A Study on Parameter Identification of Induction Machine

Su, Tzu-Jung 03 August 2011 (has links)
Parameter identification of an induction machine is of great importance in numerous industrial applications, including the assessment of machine performance and the design of control schemes. Parameter identification is based on the input-output signals and the model used. Many studies have applied an inverter drive to control the exciting signal of the induction machine during the identification process. This study proposes a method to identify all parameters of the induction machine with a no-load, low-voltage starting test. The method has a simple structure and needs no extra hardware, which significantly simplifies the procedure and saves cost. Based on the curves of resistance and reactance, the user can obtain the machine's equivalent-circuit parameters. With the identified equivalent-circuit parameters, the input voltage, and the rotor speed, the user can find the torque; from the torque and rotor speed, the user can find the mechanical parameters. A least-mean-square (LMS) method was combined with a particle swarm optimization (PSO) method to solve this problem. Various tests demonstrate the practicability and accuracy of the method. This study also proposes a method to rapidly analyze power parameters. It uses two adjacent data points to compute the fundamental-frequency component of voltage or current, whose parameters include frequency, amplitude, and phase. Under varying parameters, the frequency and phase are dependent; this method therefore fixes the frequency and computes the amplitude and phase, yielding stable results.
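A minimal particle swarm optimization sketch for least-squares parameter fitting, in the spirit of the LMS + PSO combination mentioned above, is shown below. The toy exponential model, bounds, and PSO coefficients are assumptions; the actual induction-machine equivalent circuit is not reproduced here.

```python
import numpy as np

# Minimal PSO sketch minimizing a sum-of-squares fitting error.

def pso_least_squares(model, x, y, bounds, n_particles=30, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    cost = np.array([np.sum((model(x, p) - y) ** 2) for p in pos])
    pbest, pbest_cost = pos.copy(), cost.copy()
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        cost = np.array([np.sum((model(x, p) - y) ** 2) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

# Toy usage: fit y = a * exp(-b * t)
t = np.linspace(0, 5, 50)
y = 2.0 * np.exp(-0.8 * t)
params = pso_least_squares(lambda t, p: p[0] * np.exp(-p[1] * t), t, y,
                           bounds=[(0, 5), (0, 5)])
print(params)   # should be close to [2.0, 0.8]
```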
365

Application of PMV Fuzzy Control Algorithm in Pursuing Optimum Thermal Comfort

Fang, Wen-Hong 19 June 2012 (has links)
The depletion of fossil fuels has stimulated intensive research on alternative renewable energy as well as on energy-efficient technologies. In a country like Taiwan, with a high density of population and buildings, cool fresh air is supplied by either fan-coil units or air-conditioning units. However, lacking intelligent control and a sound assessment of thermal comfort, these machines fail to provide optimal thermal comfort, a situation that often leads to excessive control and, as a consequence, wasted energy. In this work, optimal thermal comfort is pursued using PMV fuzzy control theory together with a thermal comfort monitoring system built with the LabView icon-based software. Thermal comfort indices such as the Predicted Mean Vote (PMV) and the Predicted Percentage of Dissatisfied (PPD), according to ISO 7730, are used as indicators of thermal comfort. Sensors that track variations in humidity and temperature feed an online real-time LabView calculation of PMV and PPD, and the environment is then controlled toward a comfort level around PMV = 1 using fuzzy control theory and energy-efficient equipment such as AC stepless fans and AC stepless heaters. Three test items were presented to check whether the PMV fuzzy algorithm is competitive in holding the environmental thermal comfort around PMV = 1: several comfort simulation cases, comfort simulations with random humidity and temperatures, and a 12-hour automatic control run. This empirical study confirms that it is.
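As a rough illustration of the control idea, the sketch below maps the deviation of PMV from the target (PMV = 1 in this study) through triangular membership functions to a single fan/heater command. The rule set, membership shapes, and scaling are assumptions for illustration, not the controller implemented in the thesis.

```python
import numpy as np

# Minimal fuzzy-control sketch: triangular memberships over the PMV error,
# defuzzified by a weighted average of fan/heater commands.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def fuzzy_actuation(pmv, target=1.0):
    e = pmv - target                               # positive -> too warm
    mu = {"cold": tri(e, -3, -2, 0),
          "ok":   tri(e, -1,  0, 1),
          "warm": tri(e,  0,  2, 3)}
    # Crisp outputs per rule: negative -> heater duty, positive -> fan duty
    out = {"cold": -1.0, "ok": 0.0, "warm": 1.0}
    num = sum(mu[k] * out[k] for k in mu)
    den = sum(mu.values()) + 1e-12
    return num / den                               # command in [-1, 1]

for pmv in (-0.5, 1.0, 2.2):
    print(pmv, round(fuzzy_actuation(pmv), 2))
```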
366

The study on diffusion behaviors of water molecules within carbon nanocoils by molecular dynamics simulation

Chen, Ming-Chang 08 August 2012 (has links)
In this study, molecular dynamics (MD) simulations were employed to investigate (5,5) and (10,10) single-walled carbon nanocoils and (5,5)@(10,10) double-walled carbon nanocoils. The study is arranged in two parts. Part I investigates the mechanical properties of the (5,5) and (10,10) single-walled nanocoils and the (5,5)@(10,10) double-walled carbon nanocoils. The second-generation reactive empirical bond order (REBO) potential was employed to model the interaction between carbon atoms. Contours of the atomic slip vector and the sequential slip vector were used to investigate the structural variations at different strains during the tension process. The yielding stress, maximum tensile strength, and Young's modulus were determined from the tensile stress-strain profiles. The results show that the nanocoils have superelastic characteristics compared with carbon nanotubes of the same tube diameter. Part II investigates the diffusion behavior of water molecules confined inside narrow (5,5) and (10,10) carbon nanocoils under different tensile strains. The condensed-phase optimized molecular potentials for atomistic simulation studies (COMPASS) potential was employed to model the carbon-carbon, carbon-water, and water-water interactions. To analyze the kinetic behavior of the water molecules in the two carbon nanocoils, the diffusion coefficients, square displacement (SD), and mean square displacement (MSD) of the water molecules were calculated. The results show that the diffusion coefficient of water increases with the strain of the carbon nanocoils; however, it decreases significantly at large strains due to the structural deformation of the nanocoils. The diffusion behaviors of water inside (5,5) and (10,10) carbon nanotubes were also investigated for comparison. Our results indicate that the two carbon nanocoils have a lower water diffusion coefficient than the carbon nanotubes because the geometry of a carbon nanocoil more easily blocks the diffusion of water molecules.
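A common way to extract the diffusion coefficient from such trajectories is the Einstein relation, D = lim MSD(t)/(6t) in three dimensions. The sketch below computes a simple single-time-origin MSD for a synthetic random-walk trajectory; the trajectory, time step, and fitting window are placeholders, not the COMPASS simulation data.

```python
import numpy as np

# Einstein relation: D = MSD(t) / (6 t) at long times, for a 3-D trajectory.

def mean_square_displacement(traj):
    """traj: (n_frames, n_atoms, 3) array of unwrapped positions.
    Single-time-origin MSD, averaged over atoms."""
    disp = traj - traj[0]
    return np.mean(np.sum(disp ** 2, axis=-1), axis=-1)

rng = np.random.default_rng(2)
dt = 0.002                                           # ps per frame (assumed)
steps = rng.normal(scale=0.05, size=(5000, 100, 3))  # toy random-walk steps, nm
traj = np.cumsum(steps, axis=0)

msd = mean_square_displacement(traj)
t = np.arange(len(msd)) * dt
slope = np.polyfit(t[len(t) // 2:], msd[len(t) // 2:], 1)[0]  # late-time fit
D = slope / 6.0
print(f"Estimated D = {D:.3f} nm^2/ps")
```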
367

Entrainment and mixing properties of multiphase plumes: Experimental studies on turbulence and scalar structure of a bubble plume

Seol, Dong Guan 15 May 2009 (has links)
This dissertation presents a series of laboratory experiments to study the flow and mixing properties of multiphase plumes. Particle image velocimetry (PIV) and laser-induced fluorescence (LIF) techniques are developed to measure two-dimensional velocity and concentration fields of multiphase plumes, and are applied to bubble plumes in different ambient conditions. The problems and errors in applying two-phase PIV to a bubble plume are addressed through a comparative study between an optical separation method using fluorescent particles and a new phase-separation method using vector post-processing. The study shows that the new algorithm predicts the instantaneous and time-averaged velocity profiles well and has errors comparable to those of image-masking techniques. The phase-separation method is then applied to study the mean flow characteristics of a bubble plume in a quiescent, unstratified condition. The entrainment coefficients representing the mixing properties of the bubble plume are calculated to lie between 0.08 near the plume source and 0.05 in the upper region, and to depend on the non-dimensional quantity u_s/(B/z)^{1/3}, where u_s is the bubble slip velocity, B is the initial buoyancy flux, and z is the height above the diffuser. Further, the LIF technique is used to measure the scalar concentration field around a bubble plume in a quiescent, unstratified condition. This new application to bubble plumes accounts for light scattering by bubbles using an attenuation coefficient proportional to the local void fraction. The measured scalar concentration fields show concentration-fluctuation trends similar to those of turbulent plume cases. Finally, the developed two-phase PIV and LIF methods are applied to velocity and concentration field measurements of a bubble plume in a density-stratified ambient, and the turbulent flow characteristics induced by the bubble plume in stratified ambient water are studied. The plume fluctuation frequency is measured to be about 0.1 Hz and compares well with the plume wandering frequency measured in unstratified plume cases.
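The sketch below illustrates two quantities from this abstract: the non-dimensional slip velocity u_s/(B/z)^{1/3} and an entrainment coefficient estimated from the entrainment hypothesis dQ/dz = 2*pi*b*alpha*w. The slip velocity, buoyancy flux, and plume profiles are synthetic placeholders, not the measured PIV data.

```python
import numpy as np

# Non-dimensional slip velocity and a top-hat entrainment coefficient
# estimated from synthetic plume profiles (illustrative values only).

u_s = 0.25          # m/s, bubble slip velocity (assumed)
B = 2.0e-4          # m^4/s^3, initial kinematic buoyancy flux (assumed)
z = np.linspace(0.1, 1.0, 50)          # m above the diffuser
UN = u_s / (B / z) ** (1.0 / 3.0)      # non-dimensional slip velocity
print(f"u_s/(B/z)^(1/3) ranges from {UN.min():.2f} to {UN.max():.2f}")

# Synthetic top-hat profiles: width b(z), velocity w(z), volume flux Q(z)
b = 0.08 * z                            # m
w = 1.8 * (B / z) ** (1.0 / 3.0)        # m/s, pure-plume scaling
Q = np.pi * b ** 2 * w                  # m^3/s
alpha = np.gradient(Q, z) / (2 * np.pi * b * w)   # dQ/dz = 2*pi*b*alpha*w
print(f"entrainment coefficient ~ {alpha.mean():.3f}")
```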
368

A Study of Optimal Portfolio Decision and Performance Measures

Chen, Hsin-Hung 03 June 2004 (has links)
Since most financial institutions use the Sharpe ratio to evaluate the performance of mutual funds, the objective of most fund managers is to select the portfolio that generates the highest Sharpe ratio. Traditionally, they revise the objective function of the Markowitz mean-variance portfolio model and solve a non-linear program to obtain the maximum Sharpe ratio portfolio. For the scenario with short sales allowed, this work proposes a closed-form solution for the optimal Sharpe ratio portfolio by applying Cauchy-Schwarz maximization. This method, which requires no non-linear programming software, is easier to implement than the traditional method and saves computing time and cost. Furthermore, for the scenario with short sales disallowed, we use the Kuhn-Tucker conditions to find the optimal Sharpe ratio portfolio. On the other hand, an efficient frontier generated by the Markowitz mean-variance portfolio model normally has a higher-risk, higher-return characteristic, which often poses a dilemma for the decision maker. This research applies a generalized loss function to create a family of decision-aid performance measures, called IRp, which trade off return against risk. We compare IRp with the Sharpe ratio and with utility functions to confirm that the IRp measures are appropriate for evaluating portfolio performance on the efficient frontier and for improving asset-allocation decisions. In addition, empirical data on domestic and international investment instruments are used to examine the feasibility and fitness of the proposed method and the IRp measures. This study applies Cauchy-Schwarz maximization from multivariate statistical analysis and the loss function from quality engineering to portfolio decisions. We believe these new applications complement portfolio model theory and will be meaningful for both academic and business fields.
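With short sales allowed, the maximum Sharpe ratio (tangency) portfolio admits a familiar closed form with weights proportional to Sigma^{-1}(mu - r_f); the sketch below computes it by the usual mean-variance route rather than by the Cauchy-Schwarz argument used in the thesis. The return vector and covariance matrix are illustrative assumptions.

```python
import numpy as np

# Closed-form tangency (maximum Sharpe ratio) portfolio with short sales
# allowed: weights proportional to Sigma^{-1}(mu - r_f). Inputs are made up.

mu = np.array([0.08, 0.12, 0.10])                 # expected returns
rf = 0.03                                          # risk-free rate
Sigma = np.array([[0.040, 0.006, 0.010],
                  [0.006, 0.090, 0.012],
                  [0.010, 0.012, 0.060]])          # covariance matrix

excess = mu - rf
raw = np.linalg.solve(Sigma, excess)               # Sigma^{-1}(mu - rf)
w = raw / raw.sum()                                # normalize to sum to one

sharpe = (w @ mu - rf) / np.sqrt(w @ Sigma @ w)
print("weights:", np.round(w, 3), " Sharpe:", round(sharpe, 3))
```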
369

Mean-field reflection of omni-directional acoustic wave from rough seabed with non-uniform sediment layers

Wu, Yung-Hong 23 June 2004 (has links)
This thesis studies the interaction of an omni-directional acoustic source with a rough seabed whose fluid-like sediment layer has continuously varying density and sound speed. The acoustic properties of the sediment layer follow an exponential variation in density and one of three classes of sound-speed profiles: constant, k^2-linear, or inverse-square. An analytical solution for the mean field is derived, and the mean-field reflection coefficients corresponding to these density and sound-speed profiles are numerically generated and analyzed for various frequencies and roughness parameters. Physical interpretations are provided for the results. This simple model captures two important features of the sea floor, seabed roughness and sediment inhomogeneities, and therefore provides a canonical analysis in seabed acoustics.
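For orientation, the sketch below evaluates the classical Rayleigh reflection coefficient for a flat fluid-fluid interface with constant properties — the simplest baseline against which mean-field coefficients for rough, layered seabeds are often compared. It is not the mean-field solution of the thesis, and the water and sediment parameters are assumed values.

```python
import numpy as np

# Rayleigh reflection coefficient for a flat fluid-fluid boundary
# (constant-property limit; illustrative parameter values).

def rayleigh_reflection(theta1, rho1=1000.0, c1=1500.0, rho2=1800.0, c2=1650.0):
    """theta1: incidence angle from the normal, in radians."""
    sin2 = (c2 / c1) * np.sin(theta1)           # Snell's law
    cos2 = np.sqrt(1 - sin2 ** 2 + 0j)          # complex beyond the critical angle
    Z1 = rho1 * c1 / np.cos(theta1)
    Z2 = rho2 * c2 / cos2
    return (Z2 - Z1) / (Z2 + Z1)

for deg in (0, 30, 60, 80):
    R = rayleigh_reflection(np.radians(deg))
    print(f"theta = {deg:2d} deg,  |R| = {abs(R):.3f}")
```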
370

Model robust designs for binary response experiments

Huang, Shih-hao 06 July 2004 (has links)
Binary response experiments are widely used in many areas. In many investigations, different kinds of optimal designs are discussed under an assumed model, and there are also discussions of optimal designs for discriminating between models. The main goal of this work is to find an optimal design with two support points that minimizes the maximal probability difference between possible models from two types of symmetric location-scale families. It is called the minimum-bias two-point design, or the $mB_2$ design for short. The D- and A-efficiencies of the $mB_2$ design are evaluated under an assumed model. Furthermore, when the assumed model is incorrect, the biases and mean square errors in estimating the true probabilities are computed and compared with those obtained using the D- and A-optimal designs for the incorrectly assumed model.
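For comparison with two-point designs of this kind, the sketch below computes the D-criterion of a symmetric two-point design under a standardized logistic model and recovers the classical D-optimal support points numerically. It is a baseline illustration, not the $mB_2$ design itself; the logistic model and search bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# D-criterion for a symmetric two-point design under a logistic model
# P(y=1|x) = 1/(1+exp(-x)) on the standardized (location-scale) axis.
# With half the runs at +d and half at -d, the Fisher information is
# diag(w(d), w(d)*d^2) with w = p(1-p), so det M = w(d)^2 * d^2.

def log_det_info(d):
    p = 1.0 / (1.0 + np.exp(-d))
    w = p * (1.0 - p)
    return 2 * np.log(w) + 2 * np.log(d)      # log det of the 2x2 information

res = minimize_scalar(lambda d: -log_det_info(d), bounds=(0.1, 5.0), method="bounded")
print(f"D-optimal symmetric two-point design at logit = +/- {res.x:.4f}")
# Expected: approximately +/- 1.543 (success probabilities ~0.176 and 0.824)
```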
