241

Analýza procesů formování a rozpadu rodin s využitím modelů vícestavové demografie / Analysis of family formation and dissolution processes using multistate demography modelling

Dušek, Zdeněk January 2010 (has links)
The main aim of this thesis is to analyse the development of the processes of family formation and dissolution in the Czech Republic between 1993 and 2008 using multistate demography. With this method, we analyse the probabilities of transition between individual marital states and the events that characterise these transitions. The first part of the analysis deals with the population as a whole, while the second part analyses only the part of the population of fertile age. The LIPRO program was used to create a model of marital status based on data for the Czech Republic, as well as to produce a forecast of the population by marital status for the years 2019–2023. Other parts of the thesis support its main aim. The levels of marriage and divorce rates between 1993 and 2008 were analysed, as well as potential factors that could influence these processes. The rising number of cohabitations could be one such factor, and therefore special attention is paid to the development of this type of union. Socioeconomic development and family and social policy are discussed as well, because they could also have a significant impact on the processes of family formation and dissolution. The above-mentioned factors and their relationship to the marriage rate are analysed using the...
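As a minimal sketch of the projection step that multistate demography relies on, the following assumes four hypothetical marital states and a purely illustrative transition matrix; the numbers are not taken from the thesis, and mortality and migration are ignored.

    import numpy as np

    # Hypothetical marital states and an illustrative one-year transition matrix
    # (rows: state at time t, columns: state at t+1); values are NOT from the thesis.
    states = ["single", "married", "divorced", "widowed"]
    P = np.array([
        [0.93, 0.06, 0.00, 0.01],   # single   -> ...
        [0.00, 0.95, 0.04, 0.01],   # married  -> ...
        [0.00, 0.08, 0.91, 0.01],   # divorced -> ...
        [0.00, 0.01, 0.00, 0.99],   # widowed  -> ...
    ])

    # Illustrative starting population (thousands) by marital state
    pop = np.array([800.0, 2500.0, 400.0, 300.0])

    # Project the distribution forward year by year: pop_{t+1} = pop_t @ P
    for year in range(5):
        pop = pop @ P
    print(dict(zip(states, np.round(pop, 1))))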
242

DeExcelerator: A Framework for Extracting Relational Data From Partially Structured Documents

Eberius, Julian, Werner, Christopher, Thiele, Maik, Braunschweig, Katrin, Dannecker, Lars, Lehner, Wolfgang 09 June 2021 (has links)
Of the structured data published on the web, for instance as datasets on Open Data platforms such as data.gov, but also in the form of HTML tables on the general web, only a small part is in relational form. Instead, the data is intermingled with formatting, layout and textual metadata, i.e., it is contained in partially structured documents. This makes a transformation into a true relational form necessary, which is a precondition for most forms of data analysis and data integration. Studying data.gov as an example source of partially structured documents, we present a classification of typical normalization problems. We then present DeExcelerator, a framework for extracting relations from partially structured documents such as spreadsheets and HTML tables.
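As a rough, hypothetical illustration of the normalization task such a framework addresses (not the DeExcelerator's actual algorithm), the sketch below drops title, spacer and footnote rows from a small spreadsheet-like grid, treats the first fully populated row as the header, and emits relational tuples; the sample grid is invented.

    # Hypothetical partially structured "spreadsheet" as a list of rows.
    grid = [
        ["Budget report 2012", "", ""],          # title row (layout, not data)
        ["", "", ""],                            # empty spacer row
        ["Region", "Quarter", "Spending"],       # header row
        ["North", "Q1", "1200"],
        ["North", "Q2", "1350"],
        ["South", "Q1", "980"],
        ["Source: finance dept.", "", ""],       # footnote row
    ]

    def extract_relation(rows):
        """Naive extraction: header = first row with no empty cells,
        data = subsequent rows with the same arity and no empty cells."""
        header, records = None, []
        for row in rows:
            if header is None:
                if all(cell.strip() for cell in row):
                    header = row
            elif all(cell.strip() for cell in row) and len(row) == len(header):
                records.append(dict(zip(header, row)))
        return header, records

    header, records = extract_relation(grid)
    print(header)
    for record in records:
        print(record)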
243

The Cobe DIRBE Point Source Catalog

Smith, Beverly J., Price, Stephan D., Baker, Rachel I. 01 October 2004 (has links)
We present the COBE DIRBE Point Source Catalog, an all-sky catalog containing infrared photometry in 10 bands from 1.25 to 240 μm for 11,788 of the brightest near- and mid-infrared point sources in the sky. Since DIRBE had excellent temporal coverage (100-1900 independent measurements per object during the 10 month cryogenic mission), the Catalog also contains information about variability at each wavelength, including amplitudes of variation observed during the mission. Since the DIRBE spatial resolution is relatively poor (0°.7), we have carefully investigated the question of confusion and have flagged sources with infrared-bright companions within the DIRBE beam. In addition, we filtered the DIRBE light curves for data points affected by companions outside the main DIRBE beam but within the "sky" portion of the scan. At high Galactic latitudes (|b| > 5°), the Catalog contains essentially all of the unconfused sources with flux densities greater than 90, 60, 60, 50, 90, and 165 Jy at 1.25, 2.2, 3.5, 4.9, 12, and 25 μm, respectively, corresponding to magnitude limits of approximately 3.1, 2.6, 1.7, 1.3, -1.3, and -3.5. At longer wavelengths and in the Galactic plane, the completeness is less certain because of the large DIRBE beam and possible contributions from extended emission. The Catalog also lists the names of the sources in other catalogs, their spectral types, variability types, and whether or not the sources are known OH/IR stars. We discuss a few remarkable objects in the Catalog, including the extremely red object OH 231.8+4.2 (QX Pup), an asymptotic giant branch star in transition to a protoplanetary nebula, which has a DIRBE 25 μm amplitude of 0.29 ± 0.07 mag.
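For orientation, the flux-density limits and magnitude limits quoted above are related by the usual zero-magnitude flux-density convention; the zero points are not given in the abstract, so the value used below (about 1560 Jy at 1.25 μm) is only an approximate standard one for illustration:

    m = -2.5\,\log_{10}\!\left(\frac{F_\nu}{F_{\nu,0}}\right),
    \qquad
    -2.5\,\log_{10}\!\left(\frac{90~\mathrm{Jy}}{1560~\mathrm{Jy}}\right) \approx 3.1 .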
244

Modelling and observation of exhaust gas concentrations for diesel engine control

Blanco Rodríguez, David 07 October 2013 (has links)
The dissertation covers the problem of the online estimation of the exhaust concentration of nitrogen oxides (NOx) and of the relative fuel-to-air ratio (λ⁻¹) in turbocharged diesel engines. Two information sources are used: on-board sensors measuring NOx and λ⁻¹, and control-oriented models (COM) that predict NOx and λ⁻¹ from other engine measurements. The static accuracy of the sensors is evaluated by comparing their outputs with those of a gas analyser, while their dynamics are identified on-board by performing step-like transitions on NOx and λ⁻¹ after modifying ECU actuation variables. Different methods for identifying the dynamic response of the sensors are developed in this work; these methods make it possible to identify the time response and delay of the sensors when a sufficient data set is available. In general, the sensors are accurate but respond slowly. Control-oriented models for estimating NOx and λ⁻¹ are then proposed. For λ⁻¹, the computation is based on the relative fuel-to-air ratio, where the fuel quantity comes from an ECU model and the air mass flow is measured by a sensor. For NOx, a set-point relative model based on look-up tables is fitted to represent nominal engine emissions, with an exponential correction based on the variation of intake oxygen. Further correction factors are proposed to model other effects, such as the thermal loading of the engine. The model predicts NOx quickly, with low error and a simple structure. Whether models or sensors are used, model drift and sensor dynamic deficiencies affect the final estimation. To address these problems, data fusion strategies based on Kalman filters (KF) are proposed, combining the steady-state accuracy of the sensor with the fast estimation of the models. In a first approach, a drift correction model tracks the bias between the model and the sensor while keeping the fast response of the model. In a second approach, the online updating of the look-up tables is handled by observers based on variants of the extended Kalman filter (EKF); in particular, a simplified KF allows the parameters to be observed with low computational effort. Finally, the methods and algorithms developed in this work are combined and applied to the estimation of NOx and λ⁻¹. Additionally, the dissertation covers aspects relating to the transfer of these methods to series-production engines. / Blanco Rodríguez, D. (2013). Modelling and observation of exhaust gas concentrations for diesel engine control [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/32666 / TESIS / Premios Extraordinarios de tesis doctorales
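A minimal sketch of the bias-tracking idea described above: a scalar Kalman filter estimates the slowly drifting offset between a fast (but biased) model prediction and a slow (but accurate) sensor, so that the fused signal keeps the model's dynamics with the sensor's steady-state accuracy. All signals and tuning values below are invented for illustration and are not taken from the thesis.

    import numpy as np

    rng = np.random.default_rng(0)

    # Invented "true" NOx trace (ppm) with a step change, a drifting model
    # prediction (fast, biased) and a lagged sensor reading (slow, accurate).
    n = 300
    true = np.where(np.arange(n) < 150, 200.0, 400.0)
    model = true + 30.0 + 0.1 * np.arange(n)           # fast but drifting bias
    sensor = np.empty(n)
    sensor[0] = true[0]
    for k in range(1, n):                              # first-order lag: slow sensor
        sensor[k] = 0.95 * sensor[k - 1] + 0.05 * true[k]
    sensor += rng.normal(0.0, 3.0, n)                  # measurement noise

    # Scalar Kalman filter on the bias b = sensor - model (random-walk model).
    q, r = 1.0, 9.0        # process / measurement noise variances (tuning guesses)
    b, p = 0.0, 100.0      # initial bias estimate and covariance
    fused = np.empty(n)
    for k in range(n):
        p += q                                  # predict: bias follows a random walk
        innov = sensor[k] - (model[k] + b)      # innovation vs. bias-corrected model
        kgain = p / (p + r)
        b += kgain * innov                      # update bias estimate
        p *= (1.0 - kgain)
        fused[k] = model[k] + b                 # fused output: fast model + slow bias

    print("final bias estimate:", round(b, 1))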
245

Evoluční optimalizace turnusů jízdních řádů / Evolutionary Optimization of Tour Timetables

Filák, Jakub January 2009 (has links)
This thesis deals with the problem of vehicle scheduling in public transport. It contains a theoretical introduction to vehicle scheduling and evolutionary algorithms. Vehicle scheduling is analyzed with respect to bus timetables, and evolutionary algorithms are analyzed with emphasis on genetic algorithms and the tabu search method. After the theoretical introduction, a memetic algorithm for the given problem is analyzed. Finally, the thesis contains a description of the implementation of the optimization system and discusses experiments with the system.
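As a loose sketch of the memetic approach (a genetic algorithm combined with local search), applied here to a toy assignment of timetable trips to vehicles; the encoding and cost function are invented for illustration and do not reflect the thesis's actual formulation.

    import random

    random.seed(1)

    TRIPS, VEHICLES = 12, 4   # toy instance: assign 12 timetable trips to 4 vehicles

    def cost(assign):
        """Toy objective: minimise the spread of trips over vehicles
        (a stand-in for real vehicle-scheduling costs such as dead mileage)."""
        loads = [assign.count(v) for v in range(VEHICLES)]
        return max(loads) - min(loads)

    def local_search(assign):
        """Memetic step: greedy first-improvement reassignment of single trips."""
        best = list(assign)
        for t in range(TRIPS):
            for v in range(VEHICLES):
                cand = list(best)
                cand[t] = v
                if cost(cand) < cost(best):
                    best = cand
        return best

    def crossover(a, b):
        cut = random.randrange(1, TRIPS)
        return a[:cut] + b[cut:]

    def mutate(assign, rate=0.1):
        return [random.randrange(VEHICLES) if random.random() < rate else v
                for v in assign]

    # Genetic algorithm main loop with local search applied to each offspring.
    pop = [[random.randrange(VEHICLES) for _ in range(TRIPS)] for _ in range(20)]
    for gen in range(30):
        pop.sort(key=cost)
        parents = pop[:10]
        children = [local_search(mutate(crossover(random.choice(parents),
                                                  random.choice(parents))))
                    for _ in range(10)]
        pop = parents + children

    print("best cost:", cost(min(pop, key=cost)))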
246

Assessing Crash Occurrence On Urban Freeways Using Static And Dynamic Factors By Applying A System Of Interrelated Equations

Pemmanaboina, Rajashekar 01 January 2005 (has links)
Traffic crashes have been identified as one of the main causes of death in the US, making road safety a high-priority issue that needs urgent attention. Recognizing that more, and more effective, research has to be done in this area, this thesis aims mainly at developing different statistical models related to road safety. The thesis includes three main sections: 1) an overall crash frequency analysis using negative binomial models, 2) seemingly unrelated negative binomial (SUNB) models for different categories of crashes, divided by type of crash or by the conditions in which they occur, and 3) safety models to determine the probability of crash occurrence, including a rainfall index estimated using a logistic regression model. The study corridor is a 36.25 mile stretch of Interstate 4 in Central Florida. For the first two sections, crashes from 1999 through 2002 were considered. Conventionally, most crash frequency analyses model all crashes together instead of dividing them by type of crash, peaking conditions, availability of light, severity, or pavement condition; researchers have also traditionally used AADT to represent traffic volumes in their models. Both are examples of macroscopic crash frequency modeling. To investigate microscopic models, and to identify the significant factors related to crash occurrence, a preliminary study (the first analysis) explored the use of microscopic traffic volumes by comparing AADT/VMT with the five- to twenty-minute volumes immediately preceding each crash. The volumes just before the time of crash occurrence proved to be a better predictor of crash frequency than AADT. The results also showed that road curvature, median type, number of lanes, pavement surface type and the presence of on/off-ramps are among the significant factors that contribute to crash occurrence. In the second analysis, various crash categories were prepared in order to identify exactly which factors relate to each of them, using various roadway, geometric, and microscopic traffic variables. Five categories were defined, each based on a common distinction such as type of crash: 1) multiple- and single-vehicle crashes, 2) peak and off-peak crashes, 3) dry and wet pavement crashes, 4) daytime and dark-hour crashes, and 5) property damage only (PDO) and injury crashes. The models in each category were first estimated separately; then, to account for the correlation between the disturbance terms arising from omitted variables shared by any two models in a category, seemingly unrelated negative binomial (SUNB) regression was used and the models in each category were estimated simultaneously. SUNB estimation proved advantageous for two categories: Category 1 and Category 4. Road curvature and the presence of on-ramps/off-ramps were found to be important factors related to every crash category. AADT was also significant in all models except the single-vehicle crash model. Median type and pavement surface type were among the other important factors contributing to crashes. The group of factors found in the model considering all crashes is a superset of the factors found in the individual crash categories.
The third analysis dealt with the development of a logistic regression model to infer the weather condition at a given time and location on I-4 in Central Florida, so that this information can be used in traffic safety analyses despite the lack of weather monitoring stations in the study area. To demonstrate the value of the weather information obtained from this analysis, it was used in a safety model developed by Abdel-Aty et al. (2004); including the weather information improved the safety model, giving better prediction accuracy.
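A small sketch of the kind of negative binomial crash-frequency model used in the first section, fitted here with statsmodels on synthetic data; the segment attributes and coefficients are invented, and the SUNB simultaneous-estimation step is not shown.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 200                                   # synthetic freeway segments

    # Invented segment-level covariates (stand-ins for AADT, curvature, ramps).
    df = pd.DataFrame({
        "log_aadt": rng.normal(10.5, 0.4, n),
        "curvature": rng.uniform(0.0, 1.0, n),
        "has_ramp": rng.integers(0, 2, n),
    })
    mu = np.exp(-8.0 + 0.8 * df["log_aadt"] + 0.6 * df["curvature"] + 0.3 * df["has_ramp"])
    # Draw negative binomial counts with mean mu (dispersion parameter r = 2).
    df["crashes"] = rng.negative_binomial(n=2.0, p=(2.0 / (2.0 + mu)).to_numpy())

    X = sm.add_constant(df[["log_aadt", "curvature", "has_ramp"]])
    model = sm.GLM(df["crashes"], X, family=sm.families.NegativeBinomial(alpha=0.5))
    print(model.fit().summary())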
247

Maintenance of cube automatic summary tables

Lehner, Wolfgang, Sidle, Richard, Pirahesh, Hamid, Cochrane, Roberta 10 January 2023 (has links)
Materialized views (or Automatic Summary Tables—ASTs) are commonly used to improve the performance of aggregation queries by orders of magnitude. In contrast to regular tables, ASTs are synchronized by the database system. In this paper, we present techniques for maintaining cube ASTs. Our implementation is based on IBM DB2 UDB.
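The core idea of AST maintenance, applying the delta of base-table changes to the stored aggregates instead of recomputing the aggregation, can be sketched as follows; the schema and data are invented and this is not the paper's DB2 implementation.

    from collections import defaultdict

    # Materialized summary: (region, product) -> [SUM(amount), COUNT(*)]
    summary = defaultdict(lambda: [0.0, 0])

    def refresh_insert(delta_rows):
        """Incrementally maintain the summary table for a batch of inserted rows,
        instead of recomputing the aggregation from the full base table."""
        for region, product, amount in delta_rows:
            cell = summary[(region, product)]
            cell[0] += amount
            cell[1] += 1

    def refresh_delete(delta_rows):
        """Apply deletions; groups whose count drops to zero are removed."""
        for region, product, amount in delta_rows:
            cell = summary[(region, product)]
            cell[0] -= amount
            cell[1] -= 1
            if cell[1] == 0:
                del summary[(region, product)]

    refresh_insert([("EU", "widget", 10.0), ("EU", "widget", 5.0), ("US", "gadget", 7.5)])
    refresh_delete([("EU", "widget", 5.0)])
    print(dict(summary))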
248

Effective schools and learner's achievement in Botswana secondary schools : an education management perspective

Mohiemang, Irene Lemphorwana 11 1900 (has links)
This thesis describes the background and findings of a study of effective schools and learners' achievement in Botswana senior secondary schools from an education management perspective. The aim was to identify schools that promote learners' achievement when the students' initial intake levels are taken into account. The study was guided by five research questions. It adopted an ex post facto design and a quantitative value-added methodology to answer the research questions. Simple random sampling was used to select a sample of 5662 students from the population of 58 032 who wrote the BGCSE examinations in 2005, 2006 and 2007. Two sets of data, prior and later achievement at the individual student level, were collected from BEC and Secondary Education. The statistical software MLwiN 2.10 beta 4, which is based on hierarchical linear (multilevel) modelling, was used to analyse the data for the value added by schools. The findings indicated that: a) schools differ in their effectiveness, with some schools more effective than others; b) ten characteristics of effective schools were identified from the literature review; c) schools differed in their consistency across the three core curriculum areas of Setswana, English and Mathematics; d) schools differed in their stability from year to year; and e) schools were differentially effective, being more effective for mid-ability students and for boys than for other groups. The study confirmed that the use of a single statistical measure, even in value-added analysis, can be misleading because of internal variation between departments within schools. Furthermore, the use of raw results for measuring school effectiveness was misleading: some schools which were at the top on raw results were not doing so well in terms of value added, and vice versa. Value-added measures of school performance proved to be the most appropriate measure of a school's contribution to students' learning; the value added by a school is also a measure of its productivity. The study made recommendations to improve practice, such as the use of appropriate and fairer methods to evaluate and compare schools. Areas that need further attention were suggested based on the findings of the study. / Teacher Education / D.Ed. (Education Management)
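A minimal sketch of a two-level value-added model of the kind fitted in MLwiN: students nested within schools, later achievement regressed on prior achievement, and a random school intercept whose estimate serves as the school's value added. The data are simulated and the variable names invented.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_schools, n_students = 30, 60

    # Simulated data: each school has its own "value added" effect on later scores.
    school = np.repeat(np.arange(n_schools), n_students)
    school_effect = rng.normal(0.0, 3.0, n_schools)[school]
    prior = rng.normal(50.0, 10.0, n_schools * n_students)
    later = 5.0 + 0.8 * prior + school_effect + rng.normal(0.0, 5.0, prior.size)
    df = pd.DataFrame({"school": school, "prior": prior, "later": later})

    # Random-intercept (two-level) model: later ~ prior, students nested in schools.
    model = smf.mixedlm("later ~ prior", data=df, groups=df["school"])
    result = model.fit()

    # The estimated random intercepts are the schools' value-added estimates.
    value_added = {s: float(ranef.iloc[0]) for s, ranef in result.random_effects.items()}
    print(sorted(value_added.items(), key=lambda kv: kv[1], reverse=True)[:5])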
249

Some Climatic Aspects of Tree Growth in Alaska

Giddings, J. L., Jr. 04 1900 (has links)
No description available.
250

Automatic validation and optimisation of biological models

Cooper, Jonathan Paul January 2009 (has links)
Simulating the human heart is a challenging problem, with simulations being very time consuming, to the extent that some can take days to compute even on high performance computing resources. There is considerable interest in computational optimisation techniques, with a view to making whole-heart simulations tractable. Reliability of heart model simulations is also of great concern, particularly considering clinical applications. Simulation software should be easily testable and maintainable, which is often not the case with extensively hand-optimised software. It is thus crucial to automate and verify any optimisations. CellML is an XML language designed for describing biological cell models from a mathematical modeller’s perspective, and is being developed at the University of Auckland. It gives us an abstract format for such models, and from a computer science perspective looks like a domain specific programming language. We are investigating the gains available from exploiting this viewpoint. We describe various static checks for CellML models, notably checking the dimensional consistency of mathematics, and investigate the possibilities of provably correct optimisations. In particular, we demonstrate that partial evaluation is a promising technique for this purpose, and that it combines well with a lookup table technique, commonly used in cardiac modelling, which we have automated. We have developed a formal operational semantics for CellML, which enables us to mathematically prove the partial evaluation of CellML correct, in that optimisation of models will not change the results of simulations. The use of lookup tables involves an approximation, thus introduces some error; we have analysed this using a posteriori techniques and shown how it may be managed. While the techniques could be applied more widely to biological models in general, this work focuses on cardiac models as an application area. We present experimental results demonstrating the effectiveness of our optimisations on a representative sample of cardiac cell models, in a variety of settings.
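A small sketch of the lookup-table technique referred to above, as commonly used in cardiac cell models: a voltage-dependent term containing an expensive exponential is precomputed on a grid and linearly interpolated at run time, trading a controlled approximation error for speed. The gating expression is a generic illustrative example, not taken from a specific CellML model.

    import numpy as np

    def m_inf(v):
        """Illustrative voltage-dependent steady-state gate with a costly exponential."""
        return 1.0 / (1.0 + np.exp(-(v + 20.0) / 9.0))

    # Precompute the term over the physiological voltage range once (the lookup table).
    v_grid = np.arange(-100.0, 50.0, 0.05)      # membrane potential in mV
    table = m_inf(v_grid)

    def m_inf_lut(v):
        """Run-time evaluation by linear interpolation into the precomputed table."""
        return np.interp(v, v_grid, table)

    # The interpolation error can be checked a posteriori against the direct formula.
    v_test = np.random.default_rng(3).uniform(-90.0, 40.0, 10_000)
    err = np.max(np.abs(m_inf_lut(v_test) - m_inf(v_test)))
    print(f"max absolute lookup-table error: {err:.2e}")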
