21

Power-Aware Routing in Networks

Das, Dibakar 2011 August 1900 (has links)
The objective of this work is to develop a scheme that minimizes a combination of power consumption and congestion delay in communication networks. I model the network as a set of parallel links, with flows that are able to divide their traffic among the links available to them. Power consumption at each link is concave and increasing in the load, with a non-zero intercept at the origin corresponding to idle power consumption. I believe it is possible to minimize overall power consumption by sharing links and shutting down the idle ones, as long as doing so does not lead to significant congestion in the network. In this project, I focus on developing incentives for flows to choose the minimum-cost solution. My solution involves two elements: (i) a myopic and selfish controller adopted by each source, which attempts to minimize the cost seen by that flow, and (ii) a pricing scheme at each link whose objective is to provide appropriate signals to the controllers at the sources. I use ideas drawn from population games to choose the set of source controllers, while I experiment with marginal costs and weighted Shapley values for the pricing scheme. I show that the weighted Shapley value as a pricing scheme is superior to marginal cost pricing in some simple cases.
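As a rough illustration of the pricing idea, the sketch below computes weighted Shapley cost shares for flows sharing a single link, using one standard random-order construction; the concave cost function, demands, and weights are illustrative assumptions, not the thesis's actual model.

```python
import itertools

def link_cost(load, idle=10.0, a=2.0, alpha=0.7):
    """Concave, increasing link cost with a non-zero idle intercept.
    An unused link is shut down and costs nothing."""
    return idle + a * load**alpha if load > 0 else 0.0

def weighted_shapley(demands, weights):
    """Weighted Shapley cost shares for flows sharing one link.

    Arrival orders are weighted so that, with equal weights, the
    order distribution is uniform and the shares reduce to the
    ordinary Shapley value.
    """
    players = list(demands)
    shares = {i: 0.0 for i in players}
    for order in itertools.permutations(players):
        # probability of this arrival order under the weights
        prob, cum = 1.0, 0.0
        for i in order:
            cum += weights[i]
            prob *= weights[i] / cum
        # marginal cost contributions along the order
        load = 0.0
        for i in order:
            before = link_cost(load)
            load += demands[i]
            shares[i] += prob * (link_cost(load) - before)
    return shares

demands = {"f1": 1.0, "f2": 3.0}   # hypothetical flow demands
weights = {"f1": 1.0, "f2": 2.0}   # hypothetical flow weights
print(weighted_shapley(demands, weights))
```

The shares always sum to the total link cost, since the order probabilities sum to one and the marginal contributions along any order telescope to the full cost.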
22

Surface pressure and seated discomfort

Shen, Wenqi January 1994 (has links)
This thesis presents experimental studies on the relationship between external surface pressure and perceived discomfort in seated body areas, in particular those under the ischial tuberosity and the mid-thigh. It consists of three parts. Part one provides a comprehensive review of the existing knowledge concerning seated discomfort. The current assessment methods of seated discomfort are summarised, with emphasis on the validity and reliability of rating scale methods. The implications of surface pressure for seated people are outlined from the clinical, sensory and perceptual, and ergonomics perspectives. A brief review of current technologies for pressure measurement is also provided. Part two presents the experimental work. It starts with an exploratory assessment model of seated discomfort based on pressure measures. Two preliminary experiments were conducted to test the feasibility of the model. Three further psychophysical experiments were carried out to test the validity and reliability of six selected rating scales, and to investigate the effects of surface pressure levels on perceived pressure intensity and discomfort in the seated mid-thigh and ischial tuberosity areas. Surface pressure stimuli were applied to a seated body area of 3,318 mm². Subjects judged three sensations: pressure intensity, local discomfort, and overall discomfort. The main results are: 1) a 50-point category partitioning scale was identified as the most sensitive and reliable for scaling pressure intensity and discomfort; 2) sensations of pressure intensity and discomfort increase linearly with the logarithm of the pressure stimulus level; 3) thresholds for pressure intensity and discomfort in the seated ischium and thigh areas were derived; 4) the sensitivity of intensity and discomfort to the stimuli differs between locations: the mid-thigh is more sensitive to surface pressure than the ischium, which is considered to be due to differences in load adaptation, body tissue composition and deformation; 5) local pressure discomfort dominates overall discomfort, and ratings of local discomfort are higher than those of overall discomfort. Part three discusses the findings from this research. Four models integrating overall discomfort from local discomfort components were proposed. The Weighted Average model asserts that overall discomfort is a linear combination of local discomfort components, and that the weight of each local discomfort is the proportion of that component out of the arithmetic sum of all local discomfort components. The mechanisms of discomfort were analysed. The fundamental research presented herein uniquely contributes to knowledge of human perception of seated pressure discomfort. Although it is not application-based, the findings contribute to methods of seating comfort evaluation and provide criteria by which seat designers may formulate design requirements.
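Result 2) is a Fechner-type law: ratings grow linearly in the logarithm of the stimulus. A minimal sketch of fitting such a model by least squares follows, with made-up pressure levels and ratings (the thesis's data are not reproduced here):

```python
import numpy as np

# Hypothetical pressure stimuli (kPa) and 50-point discomfort ratings;
# illustrative values only, not the thesis's measurements.
pressure = np.array([10, 20, 40, 80, 160], dtype=float)
rating = np.array([8, 15, 22, 30, 37], dtype=float)

# Fechner-style model from the abstract: rating = a + b * ln(pressure)
X = np.column_stack([np.ones_like(pressure), np.log(pressure)])
a, b = np.linalg.lstsq(X, rating, rcond=None)[0]
print(f"rating ≈ {a:.2f} + {b:.2f} * ln(pressure)")
```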
23

Predicting Hurricane Evacuation Decisions: When, How Many, and How Far

Huang, Lixin 20 June 2011 (has links)
Traffic from major hurricane evacuations is known to cause severe gridlock on evacuation routes. Better prediction of the expected amount of evacuation traffic is needed to improve the decision-making process for required evacuation routes and the possible deployment of special traffic operations, such as contraflow. The objective of this dissertation is to develop models to predict the number of daily trips and the evacuation distance during a hurricane evacuation. Two data sets from surveys of evacuees from Hurricanes Katrina and Ivan were used in the models' development. The data sets included detailed information on the evacuees, including their evacuation days, evacuation distance, distance to the hurricane location, and their associated socioeconomic characteristics, including gender, age, race, household size, rental status, income, and education level. Three prediction models were developed. The evacuation trip and rate models were developed using logistic regression. Together, they were used to predict the number of daily trips generated before hurricane landfall. These daily predictions allow for more detailed planning than traditional models, which predict only the total number of trips generated by an entire evacuation. A third model attempted to predict evacuation distance using Geographically Weighted Regression (GWR), which accounts for the spatial variation, across the different evacuation areas, in the impacts of the model predictors. All three models were developed using the survey data set from Hurricane Katrina and then evaluated using the survey data set from Hurricane Ivan. All of the models provided logical results. The logistic models showed that larger households with people under age six were more likely to evacuate than smaller households. The GWR-based evacuation distance model showed that the presence of children under age six, income, and the household's proximity to the hurricane path all had an impact on evacuation distance. While the models provided logical results, they were calibrated and evaluated with relatively limited survey data. The models can be refined with additional data from future hurricane surveys, including additional variables such as the time of day of the evacuation.
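A minimal sketch of the daily-trip idea: a logistic regression of an "evacuated on this day" indicator on household features. All variable names, coefficients, and data below are invented for illustration, not taken from the Katrina/Ivan surveys.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Hypothetical household features: size, child under age six,
# income (in $10k), and days remaining before landfall.
X = np.column_stack([
    rng.integers(1, 7, n),     # household size
    rng.integers(0, 2, n),     # child under age six (0/1)
    rng.uniform(2, 12, n),     # income
    rng.integers(0, 5, n),     # days before landfall
])
# Synthetic labels for the sketch: evacuation more likely for larger
# households with young children, and closer to landfall.
logits = 0.3 * X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 3] - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logits))

model = LogisticRegression().fit(X, y)
print(model.coef_, model.intercept_)
```

Fitting one such model per pre-landfall day yields the daily trip counts that the abstract contrasts with whole-evacuation totals.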
24

Algorithmic Properties of Transducers

Jecker, Ismaël Robin 23 April 2019 (has links) (PDF)
In this thesis, we consider three fundamental problems of transducer theory. The containment problem asks, given two transducers, whether the relation defined by the first is included in the relation defined by the second. The equivalence problem asks, given two transducers, whether they define the same relation. Finally, the sequential uniformisation problem, corresponding to the synthesis problem in the setting of transducers, asks, given a transducer, whether it is possible to deterministically pick an output corresponding to each input of its domain. These three decision problems are undecidable in general. As a first step, we consider different ways of recovering the decidability of the three problems. First, we characterise a family of classes of transducers, called controlled by effective languages, for which the containment and equivalence problems are decidable. Second, we add structural constraints to the problems considered: for instance, instead of only asking that two transducers define the same relation, we require that this relation is defined by both transducers in a similar way. This 'similarity' is formalised through the notion of delay, used to measure the difference between the output productions of two transducers. This allows us to introduce stronger decidable versions of our three decision problems, which we use to prove the decidability of the original problems in the setting of finite-valued transducers. In the second part, we study extensions of the automaton model, together with the adaptation of the sequential uniformisation problem to these new settings. Weighted automata are automata which, along each transition, output a weight in Z. Whereas a transducer preserves all the outputs mapped to a given input, weighted automata only preserve the maximal weight. In this setting, the sequential uniformisation problem turns into the determinisation problem: given a weighted automaton, is it possible to deterministically pick the maximal output mapped to each input? The decidability of this problem is open. The notion of delay allows us to devise a complete semi-algorithm for deciding it. Finally, we consider two-way transducers, which are allowed to move back and forth over the input tape. These transducers enjoy good properties with respect to the sequential uniformisation problem: every transducer admits a sequential two-way uniformiser. We strengthen this result by showing that every transducer admits a reversible two-way uniformiser, i.e., a uniformiser that is both sequential and cosequential (backward sequential). / Doctorat en Sciences / info:eu-repo/semantics/nonPublished
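For orientation, here is a minimal sketch (with an invented two-state example) of a nondeterministic one-way transducer and the relation it computes; an input mapped to several outputs is exactly the situation where a uniformiser must deterministically pick one of them.

```python
# Transitions map (state, input letter) to a set of
# (next state, output word) pairs. Example is illustrative.
transitions = {
    ("q0", "a"): {("q0", "x"), ("q1", "xx")},   # nondeterministic choice
    ("q1", "a"): {("q1", "x")},
}
initial, final = "q0", {"q0", "q1"}

def outputs(word):
    """All output words the transducer relates to `word`."""
    configs = {(initial, "")}
    for letter in word:
        configs = {
            (q2, out + o)
            for (q, out) in configs
            for (q2, o) in transitions.get((q, letter), set())
        }
    return {out for (q, out) in configs if q in final}

print(outputs("aa"))   # {'xx', 'xxx'}: two outputs for one input,
                       # so a uniformiser must select one of them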
25

A Priori Error Analysis For A Penalty Finite Element Method

Zerbinati, Umberto 04 April 2022 (has links)
Partial differential equations on domains presenting point singularities have always been of interest to applied mathematicians; this interest stems from the difficulty of proving regularity results for non-smooth domains, which has important consequences for the numerical solution of partial differential equations. In my thesis I address those consequences in the case of conforming and penalty finite element methods. The main results contained here concern a priori error estimates for conforming and penalty finite element methods with respect to the energy norm and the $\mathcal{L}^2(\Omega)$ norm, in both the standard and weighted settings.
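A priori estimates of this kind typically take the following generic shape (an illustration of the form such results take, not the thesis's precise statement; the norms and the exponent $s$ are placeholders):

```latex
\[
  \| u - u_h \|_{E} \;\le\; C\, h^{s}\, \| u \|_{H^{1+s}(\Omega)},
  \qquad 0 < s \le 1,
\]
```

where $h$ is the mesh size and $s$, often strictly below $1$ near a point singularity, reflects the reduced regularity of $u$; working in weighted Sobolev norms is one standard way to account for that loss.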
26

A Modified Cluster-Weighted Approach to Nonlinear Time Series

Lyman, Mark Ballatore 11 July 2007 (has links) (PDF)
In many applications involving data collected over time, it is important to obtain timely estimates and adjustments of the parameters associated with a dynamic model. When the dynamics of the model must be updated, time and computational simplicity are important issues. When the dynamic system is not linear, the problems of adaptation and response to feedback are exacerbated. A set of linear approximations of the process at various levels or "states" may capture the non-linear system; in this case the approximation is linear within a state and transitions from state to state over time. The transition probabilities are parametrized as a Markov chain, and the within-state dynamics are modeled by an AR time series model. In order to make the estimates available almost instantaneously, least squares and weighted least squares estimators are used. This is a modification of the cluster-weighted models proposed by Gershenfeld, Schoner, and Metois (1999). A simulation study compares the models and explores the adequacy of the least squares estimators.
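A minimal sketch of the kind of model described: a two-state Markov chain switching between AR(1) regimes, with the within-state coefficients recovered by plain least squares, the fast estimator the abstract favours. All parameter values are invented for illustration, and the true state sequence is assumed known here, which the thesis's method does not require.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate a two-state switching AR(1); parameters are illustrative.
phi = {0: 0.9, 1: -0.5}                      # per-state AR coefficients
P = np.array([[0.95, 0.05], [0.10, 0.90]])   # Markov transition matrix
n, s, y, states = 400, 0, [0.0], []
for _ in range(n):
    s = rng.choice(2, p=P[s])
    states.append(s)
    y.append(phi[s] * y[-1] + rng.normal(scale=0.3))
y, states = np.array(y), np.array(states)

# Within-state dynamics recovered by least squares per state.
for k in (0, 1):
    idx = np.where(states == k)[0] + 1   # indices of y[t] in state k
    x, t = y[idx - 1], y[idx]            # lagged and current values
    print(k, (x @ t) / (x @ x))          # LS slope through the origin
```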
27

Cluster-Weighted Models with Changepoints

Roopnarine, Cameron January 2023 (has links)
A flexible family of mixture models known as cluster-weighted models (CWMs) arises when the joint distribution of a response variable and a set of covariates can be modelled by a weighted combination of several component distributions. We introduce an extension of CWMs in which changepoints are present. Similar to the finite mixture of regressions (FMR) with changepoints, CWMs with changepoints are more flexible than standard CWMs when changepoints are believed to be present in the data. We consider changepoints within the linear Gaussian CWM, where both the marginal and conditional densities are assumed to be Gaussian. Furthermore, we consider changepoints within the Poisson and binomial CWMs. Model parameter estimation and the performance of some information criteria are investigated through simulation studies and two real-world datasets. / Thesis / Master of Science (MSc)
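In standard notation (the symbols here are the usual ones from the CWM literature, not necessarily the thesis's), the linear Gaussian CWM models the joint density as

```latex
\[
  p(\mathbf{x}, y) \;=\; \sum_{g=1}^{G} \pi_g\,
    \phi\!\bigl(y \mid \beta_{0g} + \boldsymbol{\beta}_g^{\top}\mathbf{x},\, \sigma_g^{2}\bigr)\,
    \phi_d\!\bigl(\mathbf{x} \mid \boldsymbol{\mu}_g,\, \boldsymbol{\Sigma}_g\bigr),
\]
```

where $\pi_g$ are mixing weights and $\phi$, $\phi_d$ denote the univariate and $d$-variate Gaussian densities; a changepoint extension would allow the component regression parameters to change at an unknown index, though the precise formulation is the thesis's own.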
28

Optimal weight settings in locally weighted regression: A guidance through cross-validation approach

Puri, Roshan January 2023 (has links)
Locally weighted regression (LWR) is a powerful tool that allows the estimation of a different set of coefficients for each location in the underlying data, challenging the assumption of stationary regression coefficients across a study region. The accuracy of LWR largely depends on how a researcher establishes the relationship across locations, which is often constructed using a weight matrix or function. This paper explores the different kernel functions used to assign weights to observations, including Gaussian, bi-square, and tri-cubic, and how the choice of weight variables and window size affects the accuracy of the estimates. We guide this choice through a cross-validation approach and show that the bi-square function outperforms the other kernel functions. Our findings demonstrate that the optimal window size for LWR models depends on the cross-validation (CV) approach employed. In our empirical application, full-sample CV guides the choice of a larger window size, and CV by proxy guides the choice of a smaller one. Since the CV by proxy approach focuses on the predictive ability of the model in the vicinity of one specific point (usually a policy point/site), we note that guiding model choice through this approach makes more intuitive sense when the researcher's aim is to predict the outcome at one specific site (policy or target point). To identify the optimal weight variables, while we suggest exploring various combinations of weight variables, we argue that an efficient alternative is to merge all continuous variables in the dataset into a single weight variable. / M.A. / Locally weighted regression (LWR) is a statistical technique that establishes a relationship between dependent and explanatory variables, focusing primarily on data points in proximity to a specific point of interest/target point. This technique assigns varying degrees of importance to the observations in proximity to the target point, thereby allowing the modeling of relationships that may exhibit spatial variability within the dataset. The accuracy of LWR largely depends on how researchers define relationships across different locations, which is often done using a "weight setting". We define a weight setting as a combination of weight functions (which determine how the observations around a point of interest are weighted before they enter the model), weight variables (which determine proximity between the point of interest and all other observations), and window sizes (which determine the number of observations allowed into the local regression). To find which weight setting is optimal, that is, which combination of weight functions, weight variables, and window sizes generates the lowest predictive error, researchers often employ a cross-validation (CV) approach. Cross-validation is a statistical method used to assess and validate the performance of a predictive model. It entails removing a host observation (a point of interest), predicting that point, and evaluating the accuracy of the prediction by comparing it with the actual value. In our study, we employ two CV approaches. The first is a full-sample CV approach, where we remove a host observation and predict it using the full set of observations used in the given local regression.
The second is the CV by proxy approach, which uses a similar mechanism to full-sample CV to check the accuracy of the prediction, but focuses only on nearby points that share similar characteristics with the target point. We find that the bi-square function consistently outperforms the Gaussian and tri-cubic weight functions, regardless of the CV approach. However, the choice of an optimal window size in LWR models depends on the CV approach employed: the full-sample CV method guides us toward a larger window size, while CV by proxy directs us toward a smaller one. In the context of identifying the optimal weight variables, we recommend exploring various combinations of weight variables. However, we also propose an efficient alternative, which involves merging all continuous variables within the dataset into a single weight variable instead of striving to identify the best of thousands of different weight-variable settings.
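A minimal sketch of the mechanics described above: a bi-square kernel, a weighted least squares fit around each target point, and a full-sample leave-one-out CV loop over candidate window sizes. The data, bandwidth grid, and single weight variable are invented for illustration.

```python
import numpy as np

def bisquare(d, h):
    """Bi-square kernel: weights fall smoothly to zero beyond bandwidth h."""
    return np.clip(1 - (d / h) ** 2, 0, None) ** 2

def lwr_predict(x0, X, y, h):
    """Fit a weighted least squares line around x0 and predict there."""
    sw = np.sqrt(bisquare(np.abs(X[:, 1] - x0), h))   # sqrt weights for WLS
    beta = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)[0]
    return np.array([1.0, x0]) @ beta

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 10, 100))
y = np.sin(x) + rng.normal(scale=0.2, size=100)
X = np.column_stack([np.ones_like(x), x])

# Full-sample leave-one-out CV over candidate bandwidths (window sizes)
for h in (0.5, 1.0, 2.0, 4.0):
    sse = [(y[i] - lwr_predict(x[i], np.delete(X, i, axis=0),
                               np.delete(y, i), h)) ** 2
           for i in range(len(x))]
    print(h, np.mean(sse))
```

A CV-by-proxy variant would average the squared errors only over held-out points near the policy site rather than over the full sample.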
29

Cops and Robbers in Cincinnati: A Spatial Modeling Approach for Examining the Effects of Aggressive Policing

Hall, Davin 05 October 2007 (has links)
No description available.
30

Design support for biomolecular systems

Desai, Amruta 09 April 2010 (has links)
No description available.
