  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
31

RELPH: A Computational Model for Human Decision Making

Mohammadi Sepahvand, Nazanin January 2013 (has links)
The updating process, which consists of building mental models and adapting them to changes in the environment, is impaired in neglect patients. A simple rock-paper-scissors experiment was conducted in our lab to examine updating impairments in neglect patients. The results demonstrate a significant difference between the performance of healthy and brain-damaged participants: while healthy controls had no difficulty learning the computer’s strategy, right-brain-damaged patients failed to learn it. A computational modeling approach is employed to help us better understand the reason for this difference and thus learn more about the updating process in healthy people and its impairment in right-brain-damaged patients. More broadly, we hope to learn about the nature of the updating process in general, and knowing what must be changed in the model to “brain-damage” it may shed light on the updating deficit in right-brain-damaged patients. To this end, I adapted a pattern detection method named “ELPH” into a reinforcement-learning model of human decision making called “RELPH”. This model captures the behavior of both healthy and right-brain-damaged participants in our task according to our defined measures. This thesis is thus an effort to characterize the possible differences among these groups using this computational model.
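The abstract does not spell out how ELPH or RELPH are implemented, so the following is only a minimal sketch of the general idea it describes: hypotheses over recent opponent moves predict the next move, unreliable (high-entropy) hypotheses are pruned, and the agent plays the counter to the predicted move. The class name, parameters, and entropy threshold below are illustrative assumptions, not the thesis's actual design.

```python
import math
from collections import defaultdict, Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class SimplePatternPlayer:
    """Illustrative rock-paper-scissors pattern learner (not the thesis code).

    Hypotheses are the last 1..max_len opponent moves; each keeps counts of
    the move that followed that context. Low-entropy (reliable) hypotheses
    vote on the opponent's next move, and the agent plays the counter move.
    """

    def __init__(self, max_len=3, entropy_cut=1.2):
        self.max_len = max_len
        self.entropy_cut = entropy_cut            # prune noisy hypotheses
        self.table = defaultdict(Counter)         # context tuple -> next-move counts
        self.history = []

    def _entropy(self, counts):
        total = sum(counts.values())
        return -sum(c / total * math.log2(c / total) for c in counts.values())

    def predict_opponent(self):
        votes = Counter()
        for n in range(1, min(self.max_len, len(self.history)) + 1):
            counts = self.table.get(tuple(self.history[-n:]))
            if counts and self._entropy(counts) < self.entropy_cut:
                votes[counts.most_common(1)[0][0]] += 1
        return votes.most_common(1)[0][0] if votes else "rock"

    def act(self):
        return BEATS[self.predict_opponent()]

    def observe(self, opponent_move):
        # update every context (suffix of the history) with the observed move
        for n in range(1, min(self.max_len, len(self.history)) + 1):
            self.table[tuple(self.history[-n:])][opponent_move] += 1
        self.history.append(opponent_move)
```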
32

Mobil klientsäkerhet (Mobile Client Security)

Krigh, Oskar, Lewin, Markus January 2008 (has links)
The aim of this report is to increase understanding of how security can be improved at a company that uses mobile clients. Working remotely is becoming increasingly common, and with that come ever higher demands on the technology that makes it possible. When new technology is developed, security often comes second; the focus is on functionality. The key questions raised, and which the reader will learn more about over the course of the report, are: How can a client be kept up to date when it is outside the company’s walls? How can communication with the company take place in a way that is both simple and secure? It will be shown that the administrator is not the only piece of the security puzzle at a company: the company may have a very solid security policy, but if a user handles their credentials carelessly, that policy makes no difference.
33

Bringing the history of fashion up to date: towards a model for temporal adaptation in translation

Svanberg, Kerstin January 2012 (has links)
In cultural adaptation, the translator has solid theoretical ground to stand on; scholars have elaborated strategies that are helpful to this effect. However, there is little research, if any, to rely upon in the matter of temporal adaptation. The aim of this paper is to fill this gap. The primary data used in this translational study consists of an English source text published in 2008 and the resulting target text, translated into Swedish in 2012. Hence, for the target text to function in its own time, a four-year time gap had to be filled with accurate and relevant data, in a style that would not deviate from the author’s original intentions; the target text needed to be temporally adapted. In what follows, I suggest a set of strategies for temporal adaptation. The model is elaborated with strategies for cultural adaptation as a starting point and is based upon the measures taken to relocate the target text to 2012. The suggested strategies are time bridging, updating, adjustment and omission. These four strategies make up the model that I put forward to bridge the theoretical gap that seems to prevail in the matter of temporal adaptation. However, considering that the data used in this study was relatively limited, the wider applicability of the strategies may be the subject of future studies.
34

Mobility-Matching Key Management for Secure Group Communications in Wireless Networks

Liang, Li-ling 28 July 2006 (has links)
In this thesis, we propose and analyze a multicast key backbone for secure group communications that exploits the correlated relationships between mobile users in wireless communication networks. When a batch of members joins or leaves the group, the system has to update and distribute encryption keys to ensure that only active members can receive the latest information. In previous tree-based multicast key management schemes, the depth of the key tree is unbounded, and analytically deriving the exact value of the corresponding average update cost remains an open problem; moreover, in those schemes each mobile user arrives in and departs from the system at a different time. In contrast, the depth of the proposed multicast key backbone is fixed, and more than one user may arrive or leave at a time. We exploit these two characteristics and simulate the system to obtain the average update cost per time unit, and we find that the scheme can improve the efficiency of key updating in some special cases.
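The abstract does not describe the structure of the proposed fixed-depth backbone, so the sketch below only illustrates the standard tree-based (logical key hierarchy) costing it is compared against: when members leave, every key on the path from each departing leaf to the root must be re-issued, and batching departures lets the paths share updates. The tree layout and numbering are assumptions for illustration.

```python
def keys_to_update(leaf_ids, depth):
    """Keys that must be re-issued when the members at `leaf_ids` leave.

    Illustrative logical-key-hierarchy sketch (not the thesis's backbone):
    the key tree is a complete binary tree of the given depth, nodes are
    numbered 1 (root) .. 2**(depth+1)-1, and leaves hold individual member
    keys. Every key on a path from a departing leaf to the root is
    compromised and has to be replaced; batching departures lets paths
    share updates.
    """
    updated = set()
    for leaf in leaf_ids:
        node = leaf
        while node >= 1:
            updated.add(node)
            node //= 2          # move up to the parent key
    return updated

# Example: two simultaneous departures in a depth-3 tree (leaves 8..15)
# share the root update, so the batch costs fewer re-keys than two
# separate departures would.
print(len(keys_to_update([9, 14], depth=3)))   # -> 7 keys instead of 8
```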
35

Updating weights of processes for weighted majority decisions in distributed systems

Seedhom, Yousif Faig 24 February 2012 (has links)
In a distributed system, many underlying nodes or processes work in tandem to come up with a solution to a given problem. In this report, we are concerned with distributed systems where each node is given the same problem and the system uses the solutions provided by the nodes to formulate its answer. In our case, the problem is a simple question with two possible answers, only one of which is correct. The system is asked the question at the beginning of a round; once the system answers, the round is over, the correct answer is revealed, and another round is started. To answer the question, the system uses the answers from each node and decides based on the weights of the individual nodes. In this report, we experiment with multiple ways to update the weights of the underlying nodes, and aim to study the impact of certain limitations and parameters imposed on the system, such as the maximum accuracy of the underlying nodes and the number of underlying nodes.
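The report's own update rules are not listed in the abstract; a natural baseline for this setting is the classic multiplicative weighted-majority update, sketched below under that assumption (the weighting factor `beta` and the 0/1 answer encoding are illustrative).

```python
def weighted_majority(node_votes, weights):
    """Aggregate binary answers (0 or 1) from nodes by total weight."""
    score = sum(w for v, w in zip(node_votes, weights) if v == 1)
    return 1 if score >= sum(weights) / 2 else 0

def run_round(node_votes, weights, truth, beta=0.5):
    """One question/answer round with a multiplicative weight update.

    Illustrative sketch, not necessarily one of the report's schemes:
    nodes that answered wrongly have their weight multiplied by beta < 1,
    so persistently inaccurate nodes lose influence over time.
    """
    system_answer = weighted_majority(node_votes, weights)
    new_weights = [w * (beta if v != truth else 1.0)
                   for v, w in zip(node_votes, weights)]
    return system_answer, new_weights
```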
37

Continuous Model Updating and Forecasting for a Naturally Fractured Reservoir

Almohammadi, Hisham 16 December 2013 (has links)
Recent developments in instrumentation, communication and software have enabled the integration of real-time data into the decision-making process of hydrocarbon production. Applications of real-time data integration in drilling operations and horizontal-well lateral placement are becoming common industry practice. In reservoir management, the use of real-time data has been shown to be advantageous in tasks such as improving smart-well performance and running pressure-maintenance programs. Such capabilities allow for a paradigm change in which reservoir management can be viewed as a strategy that enables a semi-continuous process of model updates and decision optimizations, instead of being periodic or reactive. This is referred to as closed-loop reservoir management (CLRM). Due to the complexity of the dynamic physical processes, the large model sizes, and the large uncertainties associated with reservoir description, continuous model updating is a large-scale problem with a high-dimensional parameter space and high computational costs. The need for an algorithm that is both feasible for practical applications and capable of generating reliable estimates of reservoir uncertainty is a key element in CLRM. This thesis investigates the validity of Markov Chain Monte Carlo (MCMC) sampling used in a Bayesian framework as an uncertainty quantification and model-updating tool suitable for real-time applications. A 3-phase, dual-porosity, dual-permeability reservoir model is used in a synthetic experiment. Continuous probability density functions of cumulative oil production for two cases with different model updating frequencies and reservoir maturity levels are generated and compared to a case with known geology, i.e., the truth case. Results show continuously narrowing ranges for cumulative oil production, with mean values approaching the truth case as model updating advances and the reservoir becomes more mature. To deal with the sensitivity of MCMC sampling to increasing numbers of observed measurements, as in the case of real-time applications, a new formulation of the likelihood function is proposed. Changing the likelihood function significantly improved chain convergence, chain mixing and forecast uncertainty quantification. Further, methods to validate the sampling quality and to judge the prior model for the MCMC process in real applications are recommended.
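The thesis's reformulated likelihood is not given in the abstract, so the sketch below only shows a generic random-walk Metropolis step for Bayesian model updating, with a plain independent-Gaussian likelihood standing in for it; the simulator interface, step size, and noise level are placeholders.

```python
import numpy as np

def metropolis_update(simulate, observed, theta0, n_steps=5000,
                      step=0.05, sigma=1.0, log_prior=lambda t: 0.0):
    """Random-walk Metropolis sampler for model updating (illustrative sketch).

    `simulate(theta)` is a placeholder for a reservoir simulator returning
    predicted measurements; `observed` are the real-time data assimilated so
    far. A simple independent-Gaussian likelihood stands in for the thesis's
    reformulated likelihood, which the abstract does not describe.
    """
    def log_post(theta):
        resid = simulate(theta) - observed
        return log_prior(theta) - 0.5 * np.sum((resid / sigma) ** 2)

    theta = np.asarray(theta0, dtype=float)
    lp = log_post(theta)
    samples = []
    for _ in range(n_steps):
        prop = theta + step * np.random.randn(theta.size)   # propose a move
        lp_prop = log_post(prop)
        if np.log(np.random.rand()) < lp_prop - lp:         # accept/reject
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    return np.array(samples)   # posterior samples -> forecast distributions
```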
38

Three Essays on Updating Forecasts in Vector Autoregression Models

Zhu, Hui 30 April 2010 (has links)
Forecasting firms’ earnings has long been an interest of market participants and academics. Traditional forecasting studies in a multivariate time series setting do not take into account that the release of market data for a specific observation period is often spread over several days or weeks. This thesis focuses on the separation of announcement timing, or data release, and the use of econometric real-time methods, which we refer to as an updated vector autoregression (VAR) forecast, to predict data that have yet to be released. In comparison to standard time series forecasting, we show that the updated forecasts become more accurate as the correlation among the standard VAR innovations increases. Forecasting with the sequential release of information has not been studied in the VAR framework, and our application to U.S. nonfarm payroll employment and the six Canadian banks shows its value. By using the updated VAR forecast, we conclude that there are relative efficiency gains in the one-step-ahead forecast compared to the ordinary VAR forecast and compared to professional consensus forecasts. Thought experiments emphasize that the release ordering is crucial in determining forecast accuracy. / Thesis (Ph.D., Economics), Queen's University, 2010.
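The exact estimator is not reproduced in the abstract; the sketch below illustrates the underlying mechanism for a VAR(1): once some series for period t+1 have been released, the forecasts of the remaining series are revised with the conditional-Gaussian formula, so the gain grows with the correlation among the VAR innovations. All symbols and the VAR(1) setup are illustrative assumptions.

```python
import numpy as np

def updated_var_forecast(A, c, Sigma, y_t, released_idx, released_vals):
    """One-step-ahead VAR(1) forecast updated with partially released data.

    Illustrative sketch: the plain forecast is mu = c + A @ y_t with
    innovation covariance Sigma. When some components of y_{t+1} have
    already been released, the still-unreleased components are updated
    with the conditional-Gaussian mean formula.
    """
    mu = c + A @ y_t
    r = np.asarray(released_idx)                      # already-released series
    u = np.setdiff1d(np.arange(len(mu)), r)           # still-unreleased series
    S_rr = Sigma[np.ix_(r, r)]
    S_ur = Sigma[np.ix_(u, r)]
    gain = S_ur @ np.linalg.inv(S_rr)                 # regression of u on r
    mu_updated = mu[u] + gain @ (np.asarray(released_vals) - mu[r])
    return u, mu_updated
```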
39

Real-Time Spatial Object Tracking on iPhone

Heidari, Amin 08 December 2011 (has links)
In this thesis, a novel object tracking algorithm is proposed that tracks objects on the Apple iPhone 4 platform in real time. The system uses the colorspace of the frames provided by the iPhone camera, together with the motion data provided by the iPhone motion sensors, to cancel the effect of iPhone rotations when tracking and when matching different candidate tracks. The proposed system also adapts to changes in target appearance and size, leading to an object tracker that is robust to such changes. Several experiments conducted on actual video sequences illustrate the functionality of the proposed approach.
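The abstract does not detail how the motion-sensor data is combined with the colour information; the sketch below shows one plausible ingredient under that caveat: undoing the in-plane camera roll, accumulated from gyroscope readings, before a candidate region is compared with the stored target appearance. The function and variable names are hypothetical.

```python
import numpy as np

def derotate_point(x, y, cx, cy, roll_rad):
    """Map an image point back through an in-plane camera rotation.

    Illustrative only; the thesis's actual use of the iPhone motion sensors
    is not described in the abstract. `roll_rad` is the camera roll
    accumulated from gyroscope readings since the target was last matched,
    and (cx, cy) is the image centre. Candidate coordinates are rotated
    back so appearance matching happens in a rotation-stabilised frame.
    """
    c, s = np.cos(-roll_rad), np.sin(-roll_rad)
    dx, dy = x - cx, y - cy
    return cx + c * dx - s * dy, cy + s * dx + c * dy
```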
