131 |
An "Interest" Index for WWW Servers and CyberRankingYAMAMOTO, Shuichiro, MOTODA, Toshihiro, HATASHIMA, Takashi 20 April 2000 (has links)
No description available.
|
132 |
Study and Implementation of the Log-Periodic Dipole Array Antenna for Electromagnetic Compatibility
Lee, Chih-Chieh. 02 July 2002
Abstract.
Electromagnetic compatibility is an active area of study. Its problems fall into two categories: conducted electromagnetic interference and radiated electromagnetic interference. A line impedance stabilization network (LISN) is used to measure conducted interference signals, while antennas are used to measure radiated interference signals. This paper focuses on the antenna.
The frequency range of radiated electromagnetic interference measurements is so wide that using half-wave dipole antennas would be very time-consuming; broadband antennas are therefore often used in their place. This article introduces the design procedure for the log-periodic dipole array antenna. Simulation data for such an antenna obtained with the NEC2 software, including the input impedance and the antenna pattern, are also provided. Furthermore, based on the parameters obtained from the simulation, a log-periodic dipole array antenna can be fabricated. In doing so, the simulation results should be adjusted to account for the specifications of the materials used, such as element diameters and transmission-line lengths. Once construction of the log-periodic dipole array antenna is complete, its measured performance can be compared with the simulation results, and the differences between them investigated to find the optimal design parameters. Finally, the antenna factor can be calculated and compared with the measurement data.
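As a rough companion to the design procedure mentioned above, here is a minimal sketch of Carrel-style LPDA sizing; the scale factor, relative spacing, and frequency band below are illustrative assumptions, not values taken from this thesis.

```python
import math

C = 299.792458  # speed of light in m*MHz, so wavelength_m = C / f_MHz
TAU, SIGMA = 0.90, 0.16  # illustrative scale factor and relative spacing

def lpda_design(f_low, f_high, tau=TAU, sigma=SIGMA):
    """Rough LPDA sizing sketch: half-wave longest element, geometric taper.

    f_low, f_high are in MHz. Real designs extend the element set beyond the
    nominal band to keep the active region inside the array; this sketch
    stops at a half wave at f_high and is for illustration only.
    """
    lengths = [C / f_low / 2.0]            # longest element ~ lambda_max / 2
    spacings = []
    while lengths[-1] > C / f_high / 2.0:  # add elements down to lambda_min / 2
        spacings.append(2.0 * sigma * lengths[-1])  # d_n = 2 * sigma * L_n
        lengths.append(tau * lengths[-1])           # L_{n+1} = tau * L_n
    return lengths, spacings

if __name__ == "__main__":
    L, d = lpda_design(200.0, 1000.0)
    # Apex half-angle follows from tan(alpha) = (1 - tau) / (4 * sigma).
    alpha = math.degrees(math.atan((1.0 - TAU) / (4.0 * SIGMA)))
    print(f"{len(L)} elements, apex half-angle ~{alpha:.1f} deg")
    for i, (li, di) in enumerate(zip(L, d + [0.0]), 1):
        print(f"element {i:2d}: length {li:6.3f} m, gap to next {di:5.3f} m")
```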
|
133 |
HRM mobil - Körjournal / HRM Mobile – Driver's Log
Karlsson, Simon; Johansson, Daniel. January 2014
This report describes the development of a driver's log add-on for mobile devices in a web-based mobile application. The add-on gives the user the ability to keep a travel logbook on the go in a fast and simple way. Some of today's most modern and popular development methods and tools were used to meet the requirements set. The work was carried out at Flex Datasystem in Örebro.
|
134 |
The effects of clumped log distribution on line intersect sampling
Tansey, Joshua. January 2014
Line intersect sampling (LIS) is a method used for quantifying post-harvest waste. It is often used by forest managers to quantify merchantable volume remaining on the cutover so that compensation may be exacted under stumpage contracts.
The theory has been studied extensively and produces an accurate measure of harvest waste under the basic theoretical assumptions that all logs are cylindrical, lie horizontally, and are randomly orientated and randomly distributed. When these assumptions are violated the method remains unbiased, although precision decreases substantially.
A computer simulation was completed to determine whether the LIS method is appropriate given a clumped distribution of logs, produced by processing at central sites on the cutover before using a forwarder to extract to the landing. The software ArcGIS with the application ModelBuilder was used to produce the LIS Model for running LIS assessments.
The simulation determined that the conventional LIS method is not appropriate under these harvesting methods: the sampling was found to be biased, with the LIS method underestimating the true volume. T-tests confirmed the significance of this bias.
LIS volume estimates were also imprecise, ranging from 0 m3/ha to double the true volume. Increasing the sampling length by a third increased precision by only a small amount. It was therefore determined that increased sampling is not worthwhile, as the associated costs do not justify the small gain in precision.
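For reference, the conventional LIS estimator evaluated by such simulations is Van Wagner's formula; the sketch below uses illustrative numbers, not data from this study.

```python
import math

def lis_volume_m3_per_ha(diameters_cm, transect_length_m):
    """Van Wagner's line intersect estimator.

    V = (pi^2 / (8 L)) * sum(d_i^2), which gives volume in m^3/ha when
    intersection diameters d_i are in cm and transect length L is in metres.
    Assumes the usual LIS conditions: cylindrical, horizontal, randomly
    oriented and randomly distributed logs.
    """
    return (math.pi ** 2 / 8.0) * sum(d ** 2 for d in diameters_cm) / transect_length_m

# Illustrative numbers only: five logs crossed along a 100 m transect.
print(f"{lis_volume_m3_per_ha([12.0, 25.0, 8.0, 30.0, 18.0], 100.0):.2f} m3/ha")
```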
|
135 |
The Automatic Generation of One- and Multi-dimensional Distributions with Transformed Density Rejection
Leydold, Josef; Hörmann, Wolfgang. January 1997
A rejection algorithm, called "transformed density rejection", is presented. It uses a new method for constructing simple hat functions for a unimodal density $f$. It is based on the idea of transforming $f$ with a suitable transformation $T$ such that $T(f(x))$ is concave. The hat function is then constructed by taking the pointwise minimum of tangents which are transformed back to the original scale. The resulting algorithm works very well for a large class of distributions and is fast. The method is also extended to the two- and multidimensional case. (author's abstract) / Series: Preprint Series / Department of Applied Statistics and Data Processing
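A minimal sketch of the idea for the common choice T = log, with a standard normal target; the design points and all implementation details here are illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: unnormalized standard normal, f(x) = exp(-x^2/2).
# With T = log, T(f(x)) = -x^2/2 is concave, so its tangents lie above it.
points = np.array([-2.0, -0.7, 0.7, 2.0])   # illustrative design points
a = points**2 / 2.0                          # tangent at p: l(x) = p^2/2 - p*x
b = -points
# For this target, adjacent tangents intersect at the midpoints of the points.
z = np.concatenate(([-np.inf], (points[:-1] + points[1:]) / 2.0, [np.inf]))

# Hat piece i is exp(a_i + b_i x) on (z_i, z_{i+1}); integrate each piece.
w = (np.exp(a + b * z[1:]) - np.exp(a + b * z[:-1])) / b
probs = w / w.sum()

def sample_tdr(n):
    out = np.empty(n)
    for k in range(n):
        while True:
            i = rng.choice(len(points), p=probs)   # pick a hat piece
            u = rng.random()
            # Invert the piecewise-exponential CDF within piece i.
            x = np.log(np.exp(b[i] * z[i]) + u * b[i] * w[i] * np.exp(-a[i])) / b[i]
            hat = np.min(a + b * x)                # pointwise minimum of tangents
            if np.log(rng.random()) <= (-x**2 / 2.0) - hat:
                out[k] = x
                break
    return out

xs = sample_tdr(5000)
print(xs.mean(), xs.std())   # should be close to 0 and 1
```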
|
136 |
Une classe d'intervalles bayésiens pour des espaces de paramètres restreints
Ghashim, Ehssan. January 2013
This thesis studies a Bayesian method, analyzed by Marchand and Strawderman (2013), for constructing Bayesian intervals for continuous density models with a constrained parameter space Θ. In particular, we obtain a class of Bayesian intervals Iπ0,α(.), associated with the truncation of a noninformative prior π0 and generated by a distribution function α(.), whose coverage probability is bounded below by (1-α)/(1+α). This class includes the HPD procedure given by Marchand and Strawderman (2006) in the case where the underlying density of a pivot is symmetric. Several examples illustrate the theory studied. Finally, we present new results on the coverage probability of Bayesian intervals belonging to the studied class for log-concave densities. These results establish the lower bound 1 - 3α/2 and generalize the results of Marchand et al. (2008), which hold under a symmetry assumption.
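To make the coverage bound concrete, here is a quick Monte Carlo sketch for one restricted-parameter model of our own choosing (X ~ N(θ, 1) with θ ≥ 0 and a truncated flat prior); it illustrates the type of result described, and is not an example from the thesis.

```python
import numpy as np
from scipy.stats import truncnorm

alpha = 0.10
rng = np.random.default_rng(1)

# Model: X ~ N(theta, 1) with theta >= 0; prior flat on [0, inf).
# The posterior is then N(x, 1) truncated to [0, inf); we take the
# equal-tailed 1 - alpha credible interval from its quantiles.
for theta in [0.0, 0.25, 0.5, 1.0, 2.0]:
    xs = rng.normal(theta, 1.0, size=20000)
    lo = truncnorm.ppf(alpha / 2, a=-xs, b=np.inf, loc=xs, scale=1.0)
    hi = truncnorm.ppf(1 - alpha / 2, a=-xs, b=np.inf, loc=xs, scale=1.0)
    coverage = np.mean((lo <= theta) & (theta <= hi))
    print(f"theta={theta:4.2f}  empirical coverage={coverage:.3f}  "
          f"bound (1-a)/(1+a)={(1 - alpha) / (1 + alpha):.3f}")
```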
|
137 |
Non-inferiority hypothesis testing in two-arm trials with log-normal data
Wickramasinghe, Lahiru. 07 April 2015
In health-related studies, non-inferiority tests are used to demonstrate that a new treatment is not worse than a currently existing treatment by more than a pre-specified margin. In this thesis, we discuss three approaches, a Z-score approach, a generalized p-value approach and a Bayesian approach, for testing non-inferiority hypotheses in two-arm trials for the ratio of log-normal means. The log-normal distribution is widely used to describe positive random variables with positive skewness, which is appealing for data arising from studies with small sample sizes. We demonstrate the approaches using data from an experimental aging study on the cognitive penetrability of posture control. We also examine the suitability of the three methods under various sample sizes via simulations. The results of the simulation studies indicate that the generalized p-value and Bayesian approaches approximately agree, and that the degree of agreement increases with sample size. The Z-score approach, however, can produce unsatisfactory results even under large sample sizes.
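As a sketch of how the first of these might look in code, a Z-score test can be run on the log scale; the margin, the delta-method standard error and the simulated data below are our illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np
from scipy.stats import norm

def noninferiority_z(y1, y2, delta=0.8, alpha=0.05):
    """Z-score non-inferiority test for the ratio of two log-normal means.

    y1, y2: positive observations (new vs reference treatment, larger = better).
    H0: E[Y1]/E[Y2] <= delta  vs  H1: E[Y1]/E[Y2] > delta, with margin delta < 1.
    On the log scale, log E[Y] = mu + sigma^2/2, so the log ratio is
    eta = (mu1 + s1^2/2) - (mu2 + s2^2/2); the SE uses the usual large-sample
    variances Var(ybar) = s^2/n and Var(s^2) ~ 2 s^4/(n - 1).
    """
    l1, l2 = np.log(y1), np.log(y2)
    n1, n2 = len(l1), len(l2)
    v1, v2 = l1.var(ddof=1), l2.var(ddof=1)
    eta = (l1.mean() + v1 / 2) - (l2.mean() + v2 / 2)
    se = np.sqrt(v1 / n1 + v1**2 / (2 * (n1 - 1)) + v2 / n2 + v2**2 / (2 * (n2 - 1)))
    z = (eta - np.log(delta)) / se
    return z, 1 - norm.cdf(z), z > norm.ppf(1 - alpha)

# Illustrative data only.
rng = np.random.default_rng(2)
y_new = rng.lognormal(mean=0.0, sigma=0.5, size=30)
y_ref = rng.lognormal(mean=0.05, sigma=0.5, size=30)
z, p, reject = noninferiority_z(y_new, y_ref)
print(f"Z = {z:.2f}, one-sided p = {p:.3f}, declare non-inferior: {reject}")
```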
|
138 |
Local Log-Linear Models for Capture-Recapture
Kurtz, Zachary Todd. 01 January 2014
Capture-recapture (CRC) models use two or more samples, or lists, to estimate the size of a population. In the canonical example, a researcher captures, marks, and releases several samples of fish in a lake. When the fish that are captured more than once are few compared to the total number that are captured, one suspects that the lake contains many more uncaptured fish. This basic intuition motivates CRC models in fields as diverse as epidemiology, entomology, and computer science.
We use simulations to study the performance of conventional log-linear models for CRC. Specifically we evaluate model selection criteria, model averaging, an asymptotic variance formula, and several small-sample data adjustments. Next, we argue that interpretable models are essential for credible inference, since sets of models that fit the data equally well can imply vastly different estimates of the population size. A secondary analysis of data on survivors of the World Trade Center attacks illustrates this issue.
Our main chapter develops local log-linear models. Heterogeneous populations tend to bias conventional log-linear models. Post-stratification can reduce the effects of heterogeneity by using covariates, such as the age or size of each observed unit, to partition the data into relatively homogeneous post-strata. One can fit a model to each post-stratum and aggregate the resulting estimates across post-strata. We extend post-stratification to its logical extreme by selecting a local log-linear model for each observed point in the covariate space, while smoothing to achieve stability.
Local log-linear models serve a dual purpose. Besides estimating the population size, they estimate the rate of missingness as a function of covariates. Simulations demonstrate the superiority of local log-linear models for estimating local rates of missingness for special cases in which the generating model varies over the covariate space. We apply the method to estimate bird species richness in continental North America and to estimate the prevalence of multiple sclerosis in a region of France.
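As a minimal illustration of the conventional log-linear approach (a sketch with synthetic counts, not data from the thesis), the independence model for three lists can be fit by Poisson regression, with the unobserved cell predicted from the intercept.

```python
import numpy as np
import statsmodels.api as sm

# Capture histories for three lists (A, B, C); the all-zero cell is unobserved.
cells = np.array([[1,0,0], [0,1,0], [0,0,1], [1,1,0], [1,0,1], [0,1,1], [1,1,1]])
counts = np.array([281, 278, 285, 95, 92, 90, 30])   # illustrative counts

# Main-effects (independence) log-linear model fit by Poisson regression.
X = sm.add_constant(cells.astype(float))
fit = sm.GLM(counts, X, family=sm.families.Poisson()).fit()

# The missing (0,0,0) cell is predicted from the intercept alone.
n000 = np.exp(fit.params[0])
print(f"observed: {counts.sum()},  estimated unobserved: {n000:.0f},  "
      f"N-hat: {counts.sum() + n000:.0f}")
```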
|
139 |
Log Engineering: Towards Systematic Log Mining to Support the Development of Ultra-large Scale Systems
Shang, Weiyi. 08 May 2014
Much of the research in software engineering focuses on understanding the dynamic nature of software systems. Such research typically uses automated instrumentation or profiling techniques on the code. In this thesis, we examine logs as another source of dynamic information. Such information is generated from statements inserted into the code during development to draw the attention of system operators and developers to important run-time events. Such statements reflect the rich experience of system experts. The rich content of logs has led to a new market for log management applications that assist in storing, querying and analyzing logs. Moreover, recent research has demonstrated the importance of logs in understanding and improving software systems. However, developers often treat logs as textual data. We believe that logs have much more potential in assisting developers. Therefore, in this thesis, we propose Log Engineering to systematically leverage logs in order to support the development of ultra-large scale systems.
To motivate this thesis, we first conduct a literature review on the state-of-the-art of software log mining. We find that logging statements and logs from the development environment are rarely leveraged by prior research. Further, current practices of software log mining tend to be ad hoc and do not scale well.
To better understand the current practice of leveraging logs, we study the challenge of understanding logs and study the evolution of logs. We find that knowledge derived from development repositories, such as issue reports, can assist in understanding logs. We also find that logs co-evolve with the code, and that changes to logs are often made without considering the needs of Log Processing Apps that surround the software system. These findings highlight the need for better documentation and tracking approaches for logs.
We then propose log mining approaches to assist the development of systems. We first find that logging characteristics provide strong indicators of defect-prone source code files. Hence, code quality improvement efforts should focus on the code with large amounts of logging statements or their churn. Finally, we present a log mining approach to assist in verifying the deployment of Big Data Analytics applications. / Thesis (Ph.D, Computing) -- Queen's University, 2014-05-08 12:56:23.319
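As a rough sketch of the kind of logging metric such studies work with (the regex, the "log density" measure and the file layout are illustrative assumptions, not the thesis's instruments), logging statements can be counted per file as follows.

```python
import re
from pathlib import Path

# Matches common Java-style logging calls, e.g. log.info(...), LOG.error(...).
LOG_CALL = re.compile(
    r"\b(?:log|logger)\s*\.\s*(?:trace|debug|info|warn|error|fatal)\s*\(",
    re.IGNORECASE,
)

def log_density(path):
    """Return (#logging statements, non-blank LOC, log calls per KLOC)."""
    lines = Path(path).read_text(errors="ignore").splitlines()
    loc = sum(1 for ln in lines if ln.strip())
    n_logs = sum(len(LOG_CALL.findall(ln)) for ln in lines)
    return n_logs, loc, 1000.0 * n_logs / max(loc, 1)

for f in Path("src").rglob("*.java"):   # hypothetical source tree
    n, loc, dens = log_density(f)
    print(f"{f}: {n} log calls, {loc} LOC, {dens:.1f} per KLOC")
```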
|
140 |
Quantifying the Permeability Heterogeneity of Sandstone Reservoirs in Boonsville Field, Texas by Integrating Core, Well Log and 3D Seismic Data
Song, Qian. 03 October 2013
Increasing hydrocarbon reserves, by finding new resources in frontier areas and by improving recovery in mature fields, to meet high energy demand is very challenging for the oil industry. Reservoir characterization and heterogeneity studies play an important role in better understanding reservoir performance to meet this goal. This study was conducted on the Boonsville Bend Conglomerate reservoir system in the Fort Worth Basin of north-central Texas. The primary reservoir is a highly heterogeneous conglomeratic sandstone. To find additional potential and optimize field exploitation, it is critical to better understand reservoir connectivity and heterogeneity. The goal of this multidisciplinary study was to quantify the permeability heterogeneity of the target reservoir by integrating core, well log and 3D seismic data.
A set of permeability heterogeneity coefficients, the variation coefficient, the dart coefficient, and the contrast coefficient, was defined in this study to quantitatively identify reservoir heterogeneity levels; these can be used to characterize both intra-bed and inter-bed heterogeneity. Post-stack seismic inversion was conducted to produce the key attribute, acoustic impedance, for calibrating log properties with the seismic data. The inverted acoustic impedance was then used to derive a porosity volume in Emerge (a Hampson-Russell module) by means of single- and multi-attribute transforms and a neural network. Establishing the correlation between permeability and porosity is critical for the permeability conversion; this was achieved using the porosity and permeability pairs measured from four cores. A permeability volume was then computed by applying this correlation. Finally, the three heterogeneity coefficients were applied to the permeability volume to quantitatively assess the heterogeneity of the target reservoir. The results show that the target interval is highly heterogeneous both vertically and laterally. The resulting heterogeneity distribution can help optimize field exploitation and infill drilling designs.
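Assuming the definitions commonly used in the reservoir heterogeneity literature (the thesis may define the coefficients differently), the three measures can be computed from a permeability profile as follows.

```python
import numpy as np

def heterogeneity_coefficients(k):
    """Common definitions of permeability heterogeneity coefficients.

    variation coefficient Vk = std(k) / mean(k)
    dart coefficient      Tk = max(k) / mean(k)
    contrast coefficient  Jk = max(k) / min(k)
    These follow common usage in the literature, not necessarily this study.
    """
    k = np.asarray(k, dtype=float)
    return k.std() / k.mean(), k.max() / k.mean(), k.max() / k.min()

# Illustrative permeability profile (mD) along a well.
k = np.array([12.0, 85.0, 3.5, 40.0, 150.0, 7.2])
vk, tk, jk = heterogeneity_coefficients(k)
print(f"Vk={vk:.2f}, Tk={tk:.2f}, Jk={jk:.2f}")
```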
|