561

Range restriction for general formulas

Brass, Stefan January 2010 (has links)
Deductive databases need general formulas in rule bodies, not only conjunctions of literals. This has been well known since the work of Lloyd and Topor on extended logic programming. Of course, formulas must be restricted in such a way that they can be effectively evaluated in finite time and produce only a finite number of new tuples (in each iteration of the TP-operator: the fixpoint can still be infinite). It is also necessary to respect the binding restrictions of built-in predicates: many of these predicates can be executed only when certain arguments are ground. Whereas for standard logic programming rules the questions of safety, allowedness, and range-restriction are relatively easy and well understood, the situation for general formulas is a bit more complicated. We give a syntactic analysis of formulas that guarantees the necessary properties.
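To make the notion of range restriction concrete, here is a minimal Python sketch of one common syntactic condition for Datalog-style rules: every variable in the head, in a negated literal, or in a built-in comparison must also occur in a positive database literal of the body. The data representation is hypothetical and the check illustrates only the general idea, not the analysis developed in the paper.

```python
# Minimal sketch of a range-restriction check for Datalog-style rules.
# A literal is (predicate, args, kind) with kind in {"db", "neg", "builtin"}.
# This is an illustration of the general idea only, not the paper's analysis.

def variables(args):
    """Variables are written as uppercase strings (a common convention)."""
    return {a for a in args if isinstance(a, str) and a[:1].isupper()}

def is_range_restricted(head, body):
    # Variables bound by positive database literals in the body.
    bound = set()
    for pred, args, kind in body:
        if kind == "db":
            bound |= variables(args)
    # Every head variable, and every variable of a negated or built-in
    # literal, must be bound by some positive database literal.
    needed = variables(head[1])
    for pred, args, kind in body:
        if kind in ("neg", "builtin"):
            needed |= variables(args)
    return needed <= bound

# p(X, Y) :- q(X), X < Y.   -- not range-restricted: Y is never bound.
head = ("p", ["X", "Y"])
body = [("q", ["X"], "db"), ("<", ["X", "Y"], "builtin")]
print(is_range_restricted(head, body))   # False

# p(X, Y) :- q(X), r(Y), X < Y.  -- range-restricted.
body2 = [("q", ["X"], "db"), ("r", ["Y"], "db"), ("<", ["X", "Y"], "builtin")]
print(is_range_restricted(head, body2))  # True
```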
562

Transforming imperative algorithms to constraint handling rules

Abdennadher, Slim, Ismail, Haythem, Khoury, Frederick January 2010 (has links)
Different properties of programs implemented in Constraint Handling Rules (CHR) have already been investigated. Proving these properties in CHR is considerably simpler than proving them in any type of imperative programming language, which triggered the proposal of a methodology to map imperative programs into equivalent CHR programs. The equivalence of both programs implies that if a property is satisfied for one, then it is satisfied for the other. The mapping methodology could be put to other beneficial uses. One such use is the automatic generation of global constraints, in an attempt to demonstrate the benefits of having a rule-based implementation for constraint solvers.
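As a rough, hypothetical illustration of the general idea of relating an imperative algorithm to a rule-based program, the Python sketch below computes a gcd both with an imperative loop and by exhaustively applying rewrite rules to a store of facts, in the spirit of the classic CHR gcd program. It is not the mapping methodology proposed by the authors.

```python
# Hedged sketch: an imperative algorithm and a rule-based ("CHR-style")
# reformulation that rewrites a constraint store until no rule applies.
# Illustrates the flavour of such a mapping only, not the paper's method.

def gcd_imperative(a, b):
    while b != 0:
        a, b = b, a % b
    return a

def gcd_rules(store):
    """store is a multiset of gcd/1 constraints, here a plain list of ints.
    Rule 1: gcd(0) is removed.
    Rule 2: gcd(n), gcd(m) with 0 < n <= m become gcd(n), gcd(m mod n)."""
    changed = True
    while changed:
        changed = False
        if 0 in store:
            store.remove(0)
            changed = True
            continue
        for i, n in enumerate(store):
            for j, m in enumerate(store):
                if i != j and 0 < n <= m:
                    store[j] = m % n
                    changed = True
                    break
            if changed:
                break
    return store

print(gcd_imperative(12, 18))   # 6
print(gcd_rules([12, 18]))      # [6]
```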
563

Persistent constraints in constraint handling rules

Betz, Hariolf, Raiser, Frank, Frühwirth, Thom January 2010 (has links)
In the most abstract definition of its operational semantics, the declarative and concurrent programming language CHR is trivially non-terminating for a significant class of programs. Common refinements of this definition, in closing the gap to real-world implementations, compromise on declarativity and/or concurrency. Building on recent work and the notion of persistent constraints, we introduce an operational semantics avoiding trivial non-termination without compromising on its essential features.
564

A tool for generating partition schedules of multiprocessor systems

Goltz, Hans-Joachim, Pieth, Norbert January 2010 (has links)
A deterministic cycle scheduling of partitions at the operating-system level is assumed for a multiprocessor system. In this paper, we propose a tool for generating such schedules. We use constraint-based programming and develop methods and concepts for a combined interactive and automatic partition scheduling system. This paper is also devoted to the basic methods and techniques for modeling and solving this partition scheduling problem. Initial application of our partition scheduling tool has proved successful and demonstrated the suitability of the methods used.
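The following toy Python sketch shows what a constraint-based formulation of partition scheduling can look like: each partition gets a processor and a start time within a fixed major cycle, subject to the constraint that partitions on the same processor do not overlap. The cycle length, partition names, and durations are hypothetical simplifications for illustration, not the tool or model described above.

```python
# Toy sketch of constraint-based partition scheduling via backtracking search.
# The model and data are hypothetical simplifications for illustration.

CYCLE = 20  # length of the major time frame (arbitrary units)

def overlaps(s1, d1, s2, d2):
    return s1 < s2 + d2 and s2 < s1 + d1

def schedule(partitions, processors, assignment=None):
    """partitions: list of (name, duration); returns {name: (processor, start)}."""
    assignment = assignment or {}
    if len(assignment) == len(partitions):
        return assignment
    name, dur = partitions[len(assignment)]
    for proc in range(processors):
        for start in range(CYCLE - dur + 1):
            # Constraint: no overlap with already scheduled partitions on this processor.
            ok = all(not (p == proc and overlaps(start, dur, s, d2))
                     for (other, d2) in partitions if other in assignment
                     for (p, s) in [assignment[other]])
            if ok:
                assignment[name] = (proc, start)
                result = schedule(partitions, processors, assignment)
                if result is not None:
                    return result
                del assignment[name]     # backtrack
    return None  # no feasible schedule

parts = [("nav", 8), ("comms", 6), ("logging", 10), ("health", 5)]
print(schedule(parts, processors=2))
```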
565

Efficiency of difference-list programming

Geske, Ulrich, Goltz, Hans-Joachim January 2010 (has links)
The difference-list technique is described in the literature as an effective method for extending lists to the right without using calls of append/3. There exist some proposals for the automatic transformation of list programs into difference-list programs. However, we are interested in the construction of difference-list programs by the programmer, avoiding the need for a transformation step. In [GG09] it was demonstrated how left-recursive procedures with a dangling call of append/3 can be transformed into right-recursion using the unfolding technique. To simplify the writing of difference-list programs, a new cons/2 procedure was introduced. In the present paper, we investigate how efficiency is influenced by using cons/2. We measure the efficiency of procedures using the accumulator technique, cons/2, DCGs, and difference lists, and compute the resulting speedup with respect to the simple procedure definition using append/3. Four Prolog systems were investigated, and we found different behaviour concerning the speedup obtained by difference lists. One result of our investigations is that a piece of advice often given in the literature for avoiding calls of append/3 could not be confirmed in this strong formulation.
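Python has no difference lists, but the efficiency question measured above can be illustrated by an analogous contrast: building a list by repeated concatenation (each step copies the list, like a dangling append/3) is quadratic, while threading a structure that can be extended at the right end in constant time (the effect difference lists achieve in Prolog) is linear. This is only an analogy under that assumption, not a reproduction of the paper's Prolog benchmarks.

```python
# Analogy sketch of the efficiency issue behind append/3 vs. difference lists.
import time

def build_by_concatenation(n):
    out = []
    for i in range(n):
        out = out + [i]          # copies 'out' each time: quadratic overall
    return out

def build_with_accumulator(n):
    out = []
    for i in range(n):
        out.append(i)            # constant-time extension at the right end
    return out

for n in (5000, 10000, 20000):
    t0 = time.perf_counter(); build_by_concatenation(n); t1 = time.perf_counter()
    t2 = time.perf_counter(); build_with_accumulator(n);  t3 = time.perf_counter()
    print(f"n={n:6d}  concatenation {t1-t0:.3f}s  accumulator {t3-t2:.3f}s")
```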
566

An efficient set-based approach to mining association rules

Hsieh, Yu-Ming 28 July 2000 (has links)
Discovery of association rules is an important problem in the area of data mining. Given a database of sales transactions, it is desirable to discover the important associations among items such that the presence of some items in a transaction will imply the presence of other items in the same transaction. Since mining association rules may require repeatedly scanning a large transaction database to find different association patterns, the amount of processing can be huge, and performance improvement is an essential concern. Within this problem, how to efficiently count large itemsets is the major task, where a large itemset is a set of items appearing in a sufficient number of transactions. In this thesis, we propose efficient algorithms for mining association rules based on a high-level set-based approach. A set-based approach allows a clear expression of what needs to be done, as opposed to specifying exactly how the operations are carried out in a low-level approach, where a low-level approach means retrieving one tuple from the database at a time. The advantage of a set-based approach, like the SETM algorithm, is that it is simple and stable over a range of parameter values. However, the SETM algorithm proposed by Houtsma and Swami may generate too many invalid candidate itemsets. Therefore, in this thesis, we propose a set-based algorithm called SETM*, which provides the same advantages as the SETM algorithm while avoiding its disadvantages. In the SETM* algorithm, we reduce the size of the candidate database by modifying the way it is constructed, where a candidate database is a transaction database formed with candidate $k$-itemsets. Then, based on the new way of constructing the candidate database in the SETM* algorithm, we propose the SETM*-2K, SETM*-MaxK, and SETM*-Lmax algorithms. In the SETM*-2K algorithm, given a $k$, we efficiently construct $L_{k}$ based on $L_{w}$, where $w = 2^{\lceil \log_{2} k \rceil - 1}$, instead of proceeding step by step. In the SETM*-MaxK algorithm, we efficiently find $L_{k}$ based on $L_{w}$, where $L_{k} \neq \emptyset$, $L_{k+1} = \emptyset$, and $w = 2^{\lceil \log_{2} k \rceil - 1}$, instead of proceeding step by step. In the SETM*-Lmax algorithm, we use a forward approach to find all maximal large itemsets from $L_{k}$, where the $k$-itemset is not included in the $k$-subsets of the $j$-itemset, except for $k = MaxK$, with $1 \leq k < j \leq MaxK$, $L_{MaxK} \neq \emptyset$, and $L_{MaxK+1} = \emptyset$. We conduct several experiments using different synthetic relational databases. The simulation results show that the SETM* algorithm outperforms the SETM algorithm in terms of storage space or execution time for all relational database settings. Moreover, we show that the proposed SETM*-2K and SETM*-MaxK algorithms also require less time to achieve their goals than the SETM or SETM* algorithms. Furthermore, we show that the proposed forward approach (SETM*-Lmax) to finding all maximal large itemsets requires less time than the backward approach proposed by Agrawal.
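To make the $L_{k}$ and candidate-itemset terminology concrete, here is a minimal, generic set-based pass (Apriori-style) that counts large itemsets level by level over an in-memory list of transactions. It is only an illustration of what "counting large itemsets" means; it is not SETM, SETM*, or any of the variants proposed in the thesis.

```python
# Minimal sketch of set-based large-itemset counting (Apriori-style).
from itertools import combinations  # noqa: F401  (combinations shown for clarity; unions are used below)

def large_itemsets(transactions, min_support):
    """transactions: list of sets of items; returns {frozenset: support count}."""
    # L1: single items with sufficient support.
    counts = {}
    for t in transactions:
        for item in t:
            key = frozenset([item])
            counts[key] = counts.get(key, 0) + 1
    L = {s: c for s, c in counts.items() if c >= min_support}
    result = dict(L)
    k = 2
    while L:
        # Candidate k-itemsets: unions of large (k-1)-itemsets differing in one item.
        candidates = {a | b for a in L for b in L if len(a | b) == k}
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        L = {s: c for s, c in counts.items() if c >= min_support}
        result.update(L)
        k += 1
    return result

data = [{"bread", "milk"}, {"bread", "beer", "eggs"},
        {"milk", "beer", "bread"}, {"bread", "milk", "beer"}]
for itemset, count in sorted(large_itemsets(data, 2).items(),
                             key=lambda x: (len(x[0]), sorted(x[0]))):
    print(sorted(itemset), count)
```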
567

Topics in analyzing longitudinal data

Ju, Hyunsu 17 February 2005 (has links)
We propose methods for analyzing longitudinal data obtained in clinical trials and other applications with repeated measures of responses taken over time. Common characteristics of longitudinal studies are correlated responses and observations taken at unequal points in time. The first part of this dissertation examines the justification of a block bootstrap procedure for repeated measurement designs, which takes into account the dependence structure of the data by resampling blocks of adjacent observations rather than individual data points. In the case of dependent stationary data, under regular conditions, the approximately studentized or standardized block bootstrap possesses a higher order of accuracy. With longitudinal data, the second part of this dissertation shows that the diagonal optimal weights for unbalanced designs can be made to improve the efficiency of the estimators in terms of the mean squared error criterion. A simulation study is conducted for each of the longitudinal designs. We will also analyze a repeated measurement data set concerning nursing home residents with multiple sclerosis, which is obtained from a large database termed the minimum data set (MDS).
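The following minimal Python sketch shows the core idea of a moving-block bootstrap: blocks of adjacent observations are resampled rather than individual points, so within-block dependence is preserved. The synthetic AR(1)-like series, block length, and statistic are illustrative assumptions; this is not the repeated-measurement procedure studied in the dissertation.

```python
# Minimal sketch of a moving-block bootstrap for a serially correlated series.
import random

def moving_block_bootstrap(series, block_length, n_replicates, statistic):
    n = len(series)
    blocks = [series[i:i + block_length] for i in range(n - block_length + 1)]
    replicates = []
    for _ in range(n_replicates):
        resample = []
        while len(resample) < n:
            resample.extend(random.choice(blocks))   # draw whole blocks with replacement
        replicates.append(statistic(resample[:n]))
    return replicates

# Example: bootstrap distribution of the mean of an AR(1)-like series.
random.seed(0)
x, series = 0.0, []
for _ in range(200):
    x = 0.7 * x + random.gauss(0, 1)
    series.append(x)

means = moving_block_bootstrap(series, block_length=10, n_replicates=500,
                               statistic=lambda s: sum(s) / len(s))
grand_mean = sum(means) / len(means)
print("bootstrap SE of the mean:",
      (sum((m - grand_mean) ** 2 for m in means) / (len(means) - 1)) ** 0.5)
```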
568

Mesoscale ensemble-based data assimilation and parameter estimation

Aksoy, Altug 01 November 2005 (has links)
The performance of the ensemble Kalman filter (EnKF) in forced, dissipative flow under imperfect model conditions is investigated through simultaneous state and parameter estimation, where the source of model error is the uncertainty in the model parameters. Two numerical models of increasing complexity are used with simulated observations. For lower complexity, a two-dimensional, nonlinear, hydrostatic, non-rotating, and incompressible sea breeze model is developed with buoyancy and vorticity as the prognostic variables. Model resolution is 4 km horizontally and 50 m vertically. The ensemble size is set at 40. Forcing is maintained through an explicit heating function with additive stochastic noise. Simulated buoyancy observations on the land surface with 40-km spacing are assimilated every 3 hours. Up to six model parameters are successfully subjected to estimation attempts in various experiments. The overall EnKF performance in terms of the error statistics is found to be superior to the worst-case scenario (when there is parameter error but no parameter estimation is performed), with an average error reduction in buoyancy and vorticity of 40% and 46%, respectively, for the simultaneous estimation of six parameters. The model chosen to represent the complexity of operational weather forecasting is the Pennsylvania State University-National Center for Atmospheric Research MM5 model with a 36-km horizontal resolution and 43 vertical layers. The ensemble size for all experiments is chosen as 40, and a 41st member is generated as the truth with the same ensemble statistics. Assimilations are performed at a 12-hour interval with simulated sounding and surface observations of horizontal winds and temperature. Only single-parameter experiments are performed, focusing on a constant inserted into the code as the multiplier of the vertical eddy mixing coefficient. The estimation experiments produce very encouraging results, and the mean estimated parameter value nicely converges to the true value while exhibiting a satisfactory level of variability.
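A toy Python sketch of the underlying technique, simultaneous state and parameter estimation with an ensemble Kalman filter: the unknown parameter is appended to the state vector and updated through its ensemble covariance with the observed state. The scalar model, parameter value, noise levels, and cycle count are hypothetical; this is not the sea-breeze or MM5 experimental setup described above.

```python
# Toy sketch of augmented-state EnKF parameter estimation (scalar model).
import numpy as np

rng = np.random.default_rng(1)
true_alpha = 0.9                     # "true" model parameter (hypothetical)
n_ens, n_cycles, obs_err = 40, 60, 0.1

def model(x, alpha):
    return alpha * x + 1.0           # simple forced scalar dynamics

x_true = 0.0
ens_x = rng.normal(0.0, 1.0, n_ens)  # state ensemble
ens_a = rng.normal(0.6, 0.2, n_ens)  # biased parameter prior

for _ in range(n_cycles):
    x_true = model(x_true, true_alpha)
    ens_x = model(ens_x, ens_a)                     # each member uses its own parameter
    y = x_true + rng.normal(0.0, obs_err)           # simulated observation of the state
    # Kalman update of the augmented state [x, alpha] from ensemble covariances.
    cov_xx = np.var(ens_x, ddof=1)
    cov_ax = np.cov(ens_a, ens_x, ddof=1)[0, 1]
    k_x = cov_xx / (cov_xx + obs_err**2)
    k_a = cov_ax / (cov_xx + obs_err**2)
    innov = y + rng.normal(0.0, obs_err, n_ens) - ens_x   # perturbed observations
    ens_x = ens_x + k_x * innov
    ens_a = ens_a + k_a * innov

print("estimated parameter:", ens_a.mean(), " true value:", true_alpha)
```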
569

Fast history matching of time-lapse seismic and production data for high resolution models

Jimenez, Eduardo Antonio 10 October 2008 (has links)
Integrated reservoir modeling has become an important part of day-to-day decision analysis in oil and gas management practices. A very attractive and promising technology is the use of time-lapse or 4D seismic as an essential component in subsurface modeling. Today, 4D seismic is enabling oil companies to optimize production and increase recovery by monitoring fluid movements throughout the reservoir. 4D seismic advances are also being driven by an increased need of the petroleum engineering community to become more quantitative and accurate in its ability to monitor reservoir processes. Qualitative interpretations of time-lapse anomalies are being replaced by quantitative inversions of 4D seismic data to produce accurate maps of fluid saturations, pore pressure, and temperature, among others. Of all the steps involved in this subsurface modeling process, the most demanding one is integrating the geologic model with dynamic field data, including 4D seismic when available. The validation of the geologic model against observed dynamic data is accomplished through a "history matching" (HM) process typically carried out with well-based measurements. Due to the low resolution of production data, the validation process is severely limited in its areal coverage of the reservoir, compromising the quality of the model and of any subsequent predictive exercise. This research will aim to provide a novel history matching approach that can use information from high-resolution seismic data to supplement the areally sparse production data. The proposed approach will utilize streamline-derived sensitivities as a means of relating the forward model performance to the prior geologic model. The essential ideas underlying this approach are similar to those used for high-frequency approximations in seismic wave propagation. In both cases, this leads to solutions that are defined along "streamlines" (fluid flow) or "rays" (seismic wave propagation). Synthetic and field data examples will be used extensively to demonstrate the value and contribution of this work. Our results show that the problem of non-uniqueness in this complex history matching problem is greatly reduced when constraints in the form of saturation maps from spatially closely sampled seismic data are included. Furthermore, our methodology can be used to quickly identify discrepancies between static and dynamic modeling. Reducing this gap will ensure robust and reliable models, leading to accurate predictions and ultimately an optimal hydrocarbon extraction.
570

Performance Evaluation of Data Access Latency Hiding Techniques in Processor Design

Jhang, Jia-hao 11 September 2007 (has links)
Due to the effect of deep submicron technology, the gap between processor speed and memory access performance continues to increase. In order to mitigate the performance degradation caused by this gap, one approach is to fetch data that the processor is about to access into buffer memory on the processor chip in advance. The waiting time for memory accesses between main memory and the on-chip cache memory can thus be reduced. Previous research utilizes low-level techniques to pre-fetch data, such as the insertion of pre-fetch instructions and pre-fetching with predicted data locations based on dynamic learning. These techniques do not utilize analysis of a program's high-level data structures to assist data pre-fetching. In this research, we carried out a performance evaluation of our proposed data pre-fetch technique based on the analysis of high-level data structures. We also compare our method with some existing low-level data pre-fetch techniques. The evaluation metrics include the accuracy of data pre-fetches, memory latency hiding, and overall execution performance.
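The small trace-driven Python sketch below computes two of the metrics mentioned above, pre-fetch accuracy (fraction of issued pre-fetches whose line is later used by a demand access) and coverage (fraction of demand misses removed by pre-fetching). The direct-mapped cache model, the trace format, and the next-line pre-fetch example are hypothetical assumptions for illustration; this is not the simulator or technique evaluated in the thesis.

```python
# Trace-driven sketch of pre-fetch accuracy and coverage over a simple
# direct-mapped cache. All modeling choices here are illustrative assumptions.

def simulate(trace, prefetches, cache_lines=64, line_size=64):
    """trace: list of byte addresses in access order;
    prefetches: {access index: address pre-fetched just before that access}."""
    cache = {}   # set index -> (tag, brought_in_by_prefetch, demand_used)
    stats = {"misses": 0, "useful_pf": 0, "issued_pf": 0, "covered": 0}

    def lookup(addr, is_prefetch=False):
        line = addr // line_size
        idx, tag = line % cache_lines, line // cache_lines
        entry = cache.get(idx)
        if entry and entry[0] == tag:                    # cache hit
            if not is_prefetch and entry[1] and not entry[2]:
                stats["useful_pf"] += 1                  # first demand use of a pre-fetched line
                stats["covered"] += 1                    # a demand miss removed by pre-fetching
                cache[idx] = (tag, True, True)
            return True
        cache[idx] = (tag, is_prefetch, False)           # install the line on a miss
        return False

    for i, addr in enumerate(trace):
        if i in prefetches:
            stats["issued_pf"] += 1
            lookup(prefetches[i], is_prefetch=True)
        if not lookup(addr):
            stats["misses"] += 1

    accuracy = stats["useful_pf"] / stats["issued_pf"] if stats["issued_pf"] else 0.0
    demand_misses = stats["covered"] + stats["misses"]
    coverage = stats["covered"] / demand_misses if demand_misses else 0.0
    return accuracy, coverage, stats

# Example: a sequential access pattern with next-line pre-fetching.
trace = [i * 64 for i in range(32)]
prefetches = {i: (i + 1) * 64 for i in range(31)}
print(simulate(trace, prefetches))
```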
