  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
61

Wrapping XML-Sources to Support Update Awareness

Thuresson, Marcus January 2000 (has links)
Data warehousing is a generally accepted method of providing corporate decision support. Today, the majority of information in these warehouses originates from sources within a company, although changes often occur from the outside. Companies need to look outside their enterprises for valuable information, increasing their knowledge of customers, suppliers, competitors, etc.

The largest and most frequently accessed information source today is the Web, which holds more and more useful business information. Today, the Web primarily relies on HTML, making mechanical extraction of information a difficult task. In the near future, XML is expected to replace HTML as the language of the Web, bringing more structure and content focus.

One problem when considering XML-sources in a data warehouse context is their lack of update awareness capabilities, which restricts eligible data warehouse maintenance policies. In this work, we wrap XML-sources in order to provide update awareness capabilities.

We have implemented a wrapper prototype that provides update awareness capabilities for autonomous XML-sources, especially change awareness, change activeness, and delta awareness. The prototype wrapper complies with recommendations and working drafts proposed by W3C, thereby being compliant with most off-the-shelf XML tools. In particular, change information produced by the wrapper is based on methods defined by the DOM, implying that any DOM-compliant software, including most off-the-shelf XML processing tools, can be used to incorporate identified changes in a source into an older version of it.

For the delta awareness capability we have investigated the possibility of using change detection algorithms proposed for semi-structured data. We have identified similarities and differences between XML and semi-structured data, which affect delta awareness for XML-sources. As a result of this effort, we propose an algorithm for change detection in XML-sources. We also propose matching criteria for XML-documents, to which the documents have to conform to be subject to change awareness extension.
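The kind of change detection the abstract describes can be illustrated with a small sketch. This is not the thesis prototype or its algorithm: it is a naive positional diff over two versions of an XML document, using Python's standard xml.etree module, with hypothetical operation names.

```python
# Illustrative sketch (not the thesis prototype): detect simple changes
# between two versions of an XML document by walking both trees in parallel.
import xml.etree.ElementTree as ET

def diff_elements(old, new, path=""):
    """Yield (operation, path, detail) tuples for a naive positional diff."""
    here = f"{path}/{old.tag}"
    if old.tag != new.tag:
        yield ("rename", here, new.tag)
    if (old.text or "").strip() != (new.text or "").strip():
        yield ("update-text", here, (new.text or "").strip())
    if old.attrib != new.attrib:
        yield ("update-attrs", here, new.attrib)
    old_kids, new_kids = list(old), list(new)
    for o, n in zip(old_kids, new_kids):
        yield from diff_elements(o, n, here)
    for n in new_kids[len(old_kids):]:          # extra children in new version
        yield ("insert", here, n.tag)
    for o in old_kids[len(new_kids):]:          # children removed in new version
        yield ("delete", here, o.tag)

old = ET.fromstring("<order><id>1</id><qty>2</qty></order>")
new = ET.fromstring("<order><id>1</id><qty>3</qty><note>rush</note></order>")
changes = list(diff_elements(old, new))
```

A real delta-awareness algorithm must also match moved or reordered elements, which is exactly where the matching criteria mentioned above come in; this positional walk only illustrates the shape of the output.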
62

Staffing service centers under arrival-rate uncertainty

Zan, Jing, 1983- 13 July 2012 (has links)
We consider the problem of staffing large-scale service centers with multiple customer classes and agent types operating under quality-of-service (QoS) constraints. We introduce formulations for a class of staffing problems, minimizing the cost of staffing while requiring that the long-run average QoS achieves a certain pre-specified level. The queueing models we use to define such service center staffing problems have random inter-arrival times and random service times. The models we study differ with respect to whether the arrival rates are deterministic or stochastic. In the deterministic version of the service center staffing problem, we assume that the customer arrival rates are known deterministically. It is computationally challenging to solve our service center staffing problem with deterministic arrival rates. Thus, we provide an approximation and prove that the solution of our approximation is asymptotically optimal, in the sense that the gap between the optimal value of the exact model and the objective function value of the approximate solution shrinks to zero as the size of the system grows large. In our work, we also focus on doubly stochastic service center systems; that is, we focus on solving large-scale service center staffing problems when the arrival rates are uncertain in addition to the inherent randomness of the system's inter-arrival times and service times. This brings the modeling closer to reality. Building on our solution to the problem with deterministic arrival rates, we provide a solution procedure for staffing problems in doubly stochastic service center systems. We consider a decision-making scheme in which we must select staffing levels before observing the arrival rates. We assume that the decision maker has distributional information about the arrival rates at the time of decision making.
In the presence of arrival-rate uncertainty, the decision maker's goal is to minimize the staffing cost while ensuring that the QoS achieves a given level. We show that as the system grows large, there is at most one key scenario under which the probability of waiting converges to a non-trivial value, i.e., a value strictly between 0 and 1; in every other scenario, the system becomes either over- or under-loaded as the size of the system grows to infinity. Exploiting this result, we propose a two-step solution procedure for the staffing problem with random arrival rates. In the first step, we use the desired QoS level to identify the key scenario corresponding to the optimal staffing level. After finding the key scenario, the random arrival-rate model reduces to a deterministic arrival-rate model. In the second step, we solve the resulting model, with deterministic arrival rate, using the approximation model described above. The approximate optimal staffing level obtained in this procedure asymptotically converges to the true optimal staffing level for the random arrival-rate problem. The decision-making scheme sketched above assumes that the distribution of the random arrival rates is known at the time of decision making. In reality, this distribution must be estimated from historical data and experience, and needs to be updated as new observations arrive. Another important issue in service center management is that the daily operational period is split into short decision periods, for example hourly periods, and staffing decisions must be made for each of these periods. Thus, to achieve an overall optimal daily staffing policy, one must deal with the interaction among staffing decisions over adjacent time periods. In our work, we also build a model that handles these two issues.
We build a two-stage stochastic model with recourse that provides the staffing decisions over two adjacent decision time periods, i.e., two adjacent decision stages. The model minimizes the first-stage staffing cost and the expected second-stage staffing cost while satisfying a service-quality constraint on the second-stage operation. A Bayesian update is used to obtain the second-stage arrival-rate distribution from the first-stage arrival-rate distribution and the arrival observations in the first stage; the second-stage distribution is used in the constraint on second-stage service quality. After reformulation, we show that our two-stage model can be expressed as a newsvendor model, albeit with a demand that is derived from the first-stage decision. We provide an algorithm that can solve the two-stage staffing problem under the most commonly used QoS constraints. This work uses stochastic programming methods to solve problems arising in queueing networks. We hope that the ideas we put forward in this dissertation lead to other attempts to deal with decision making under uncertainty for queueing systems, combining techniques from stochastic programming with analysis tools from queueing theory.
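The newsvendor reduction mentioned in the abstract can be illustrated generically. The sketch below is not the dissertation's exact formulation: it computes the classical newsvendor level for a hypothetical discrete demand distribution via the critical-fractile rule, with made-up cost parameters.

```python
# Generic newsvendor sketch (illustrative, not the dissertation's exact model):
# choose the staffing level s minimizing c_over*E[(s-D)+] + c_under*E[(D-s)+]
# for a discrete demand D. The optimizer is the smallest s with
# P(D <= s) >= c_under / (c_under + c_over)   (the critical fractile).

def newsvendor_level(pmf, c_under, c_over):
    """pmf: dict {demand: probability}. Returns the optimal level."""
    fractile = c_under / (c_under + c_over)
    cum = 0.0
    for d in sorted(pmf):
        cum += pmf[d]
        if cum >= fractile - 1e-12:
            return d
    return max(pmf)

# Hypothetical agent-demand scenarios derived from arrival-rate scenarios:
pmf = {90: 0.2, 100: 0.5, 110: 0.3}
level = newsvendor_level(pmf, c_under=4.0, c_over=1.0)   # understaffing costly
```

With understaffing four times as costly as overstaffing, the critical fractile is 0.8, so the sketch staffs for the high-demand scenario; symmetric costs would pick a lower level.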
63

Exploiting Tracking Area List Concept in LTE Networks

Nawaz, Mohsin January 2013 (has links)
Signaling overhead has always been a concern for network operators. LTE offers many improvements aimed at better network performance and management. This thesis exploits the Tracking Area List (TAL) concept in LTE networks. An algorithm for designing TALs from UE traces is developed. The performance of the TAL design is compared with that of the conventional TA design, as well as with a rule-of-thumb TAL design, another approach to designing TALs.
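The trade-off such a design navigates can be sketched with a hypothetical cost model (not the thesis algorithm): paging cost grows with the size of a tracking area, while tracking-area-update (TAU) cost is paid on each boundary crossing. All cell names, rates, and cost weights below are assumptions.

```python
# Hypothetical signaling-cost model (illustrative, not the thesis algorithm):
# total signaling for a tracking-area partition = paging cost + TAU cost.

def signaling_cost(partition, pagings, crossings, c_page=1.0, c_tau=10.0):
    """partition: list of sets of cells; pagings: {cell: pages/hour};
    crossings: {(cell_a, cell_b): moves/hour}. Returns cost per hour."""
    ta_of = {cell: i for i, ta in enumerate(partition) for cell in ta}
    # Each page is broadcast in every cell of the target's tracking area.
    page_cost = sum(pagings[c] * len(partition[ta_of[c]]) for c in ta_of)
    # A TAU is triggered only when a move crosses a TA boundary.
    tau_cost = sum(rate for (a, b), rate in crossings.items()
                   if ta_of[a] != ta_of[b])
    return c_page * page_cost + c_tau * tau_cost

pagings = {"c1": 5, "c2": 5, "c3": 5}
crossings = {("c1", "c2"): 20, ("c2", "c3"): 1}   # heavy mobility c1<->c2
merged = signaling_cost([{"c1", "c2"}, {"c3"}], pagings, crossings)
split = signaling_cost([{"c1"}, {"c2"}, {"c3"}], pagings, crossings)
```

Under these assumed rates, grouping the two high-mobility cells into one TA is cheaper than keeping every cell separate; a TAL design algorithm searches for such groupings systematically from UE traces.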
64

Duomenų atnaujinimo lygiagretumo konfliktų sprendimas prekybos ir klientų aptarnavimo sistemose / Data update concurrency conflict solutions in commerce and customer service systems

Kėsas, Marius 26 May 2004 (has links)
There are many benefits to upgrading your data access layer to ADO.NET, most of which involve using the intrinsic DataSet object. The DataSet object is basically a disconnected, in-memory replica of a database. DataSets provide many benefits, but also present a few challenges. Specifically, you can run into problems related to data concurrency exceptions. I've created a simple Windows® Forms customer service application that illustrates the potential pitfalls of this particular problem. I'll walk you through my research and show you ways to overcome the data concurrency issues that arose. DataSets provide a number of benefits. For example, you gain the ability to enforce rules of integrity in memory rather than at the database level. The most important benefit of using DataSets, however, is improved performance. Since the DataSet is disconnected from the underlying database, your code will make fewer calls to the database, significantly boosting performance. As with most performance optimizations, this one comes with a price. Since the DataSet object is disconnected from the underlying database, there is always a chance that the data is out of date. Since a DataSet doesn't hold live data, but rather a snapshot of live data at the time the DataSet was filled, problems related to data concurrency can occur.
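The conflict the abstract describes can be sketched language-neutrally. The example below mimics, in Python rather than ADO.NET, the optimistic-concurrency check behind a disconnected update: the update succeeds only if the row still matches the snapshot taken when the "DataSet" was filled. All names are illustrative.

```python
# Sketch of the optimistic-concurrency check behind disconnected updates
# (illustrative, in Python rather than ADO.NET).

class ConcurrencyError(Exception):
    pass

def apply_update(table, key, original_snapshot, new_values):
    """Update row `key` only if it still equals the snapshot."""
    current = table[key]
    if current != original_snapshot:          # changed by someone else meanwhile
        raise ConcurrencyError(f"row {key} modified since snapshot")
    table[key] = {**current, **new_values}

db = {1: {"name": "Ada", "phone": "555-0100"}}
snapshot = dict(db[1])                        # filled into the "DataSet"
db[1]["phone"] = "555-0199"                   # concurrent change by another user
try:
    apply_update(db, 1, snapshot, {"name": "Ada L."})
    conflict = False
except ConcurrencyError:
    conflict = True                           # stale snapshot detected
```

A typical resolution, in the spirit of the abstract, is to refresh the snapshot from the database and then reapply or merge the pending change rather than silently overwrite the newer data.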
65

Data Privacy Preservation in Collaborative Filtering Based Recommender Systems

Wang, Xiwei 01 January 2015 (has links)
This dissertation studies data privacy preservation in collaborative filtering based recommender systems and proposes several collaborative filtering models that aim at preserving user privacy from different perspectives. The empirical study on multiple classical recommendation algorithms presents the basic idea of the models and explores their performance on real-world datasets. The algorithms investigated in this study include a popularity based model, an item similarity based model, a singular value decomposition based model, and a bipartite graph model. Top-N recommendations are evaluated to examine the prediction accuracy. It is apparent that with more customers' preference data, recommender systems can better profile customers' shopping patterns, which in turn produces product recommendations with higher accuracy. Precautions should therefore be taken to address the privacy issues that arise when data is shared between two vendors. Our study shows that matrix factorization techniques are, by their nature, ideal choices for data privacy preservation. In this dissertation, singular value decomposition (SVD) and nonnegative matrix factorization (NMF) are adopted as the fundamental techniques for collaborative filtering to make privacy-preserving recommendations. The proposed SVD based model utilizes missing value imputation, a randomization technique, and the truncated SVD to perturb the raw rating data. The NMF based models, namely iAux-NMF and iCluster-NMF, take into account the auxiliary information of users and items to help missing value imputation and privacy preservation. Additionally, these models support efficient incremental data update as well. A good number of online vendors allow people to leave their feedback on products, which is considered users' public preferences.
However, due to the connections between users' public and private preferences, if a recommender system fails to distinguish real customers from attackers, the private preferences of real customers can be exposed. This dissertation addresses an attack model in which an attacker holds real customers' partial ratings and tries to obtain their private preferences by cheating recommender systems. To resolve this problem, trustworthiness information is incorporated into NMF based collaborative filtering techniques to detect the attackers and make reasonably different recommendations to the normal users and the attackers. By doing so, users' private preferences can be effectively protected.
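One ingredient mentioned in the abstract, randomization-based perturbation of raw ratings, can be sketched as follows. This is not the iAux-NMF or iCluster-NMF model; the rating scale, noise level, and seed are assumptions for illustration.

```python
# Illustrative sketch of rating perturbation by randomization (one ingredient
# of privacy-preserving CF; not the iAux-NMF/iCluster-NMF models themselves).
import random

def perturb_ratings(ratings, scale=0.5, lo=1.0, hi=5.0, seed=42):
    """Add zero-mean Gaussian noise to each known rating, clamped to the
    rating scale. `ratings`: {(user, item): rating}. Returns a perturbed copy."""
    rng = random.Random(seed)
    return {k: min(hi, max(lo, v + rng.gauss(0.0, scale)))
            for k, v in ratings.items()}

raw = {("u1", "i1"): 4.0, ("u1", "i2"): 2.0, ("u2", "i1"): 5.0}
noisy = perturb_ratings(raw)   # share `noisy` instead of the raw ratings
```

The perturbed matrix preserves aggregate structure well enough for factorization-based recommendation while no individual shared rating is exact, which is the intuition behind perturbation-based privacy preservation.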
66

Analyse und Vorhersage der Aktualisierungen von Web-Feeds / Analysis and prediction of web feed updates

Reichert, Sandro 14 March 2012 (has links) (PDF)
Feeds are used, among other things, to inform users in a uniform format and in aggregated form about updates or new posts on websites. Since feeds generally offer no notification functionality, interested parties must check them for updates regularly. The study of such techniques forms the core of this thesis. The algorithms used in the related domains of web crawling and web caching to predict the times of updates are reviewed and adapted to the specific requirements of the feed domain. A newly developed algorithm is then presented which, even without special configuration parameters and without a training phase, makes better predictions on average than the other algorithms considered. Based on an analysis of various metrics for assessing prediction quality, a summarizing quality measure is defined that allows algorithms to be compared by a single value. Furthermore, query-specific attributes of the feed formats are examined, and it is shown empirically that predicting changes from the partial history of a feed already achieves better results than incorporating the values provided by the service providers into the computation. The empirical evaluations are based on a broad, real-world feed dataset, which is made freely available to the scientific community to ease comparison with new algorithms.
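The history-based prediction the abstract argues for can be illustrated with a deliberately simple sketch (not the thesis's algorithm): estimate the next update time from the average gap between past updates and schedule the next poll accordingly. The minimum polling gap is an assumed politeness parameter.

```python
# Simple history-based polling sketch (illustrative, not the thesis algorithm):
# predict the next feed update from the average gap between past updates.

def next_poll_time(update_times, now, min_gap=60.0):
    """update_times: sorted timestamps (seconds) of observed feed updates.
    Returns the suggested time of the next poll."""
    if len(update_times) < 2:
        return now + min_gap                  # no usable history: fall back
    gaps = [b - a for a, b in zip(update_times, update_times[1:])]
    avg = sum(gaps) / len(gaps)
    predicted = update_times[-1] + avg        # expected next update
    return max(predicted, now + min_gap)      # never poll too aggressively

updates = [0.0, 3600.0, 7200.0, 10800.0]      # hourly updates observed so far
poll_at = next_poll_time(updates, now=10900.0)
```

Polling exactly at the predicted instant trades off missed updates against wasted requests; the metrics and the summarizing quality measure described above exist precisely to score such trade-offs.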
67

Machine Learning Methods for Annual Influenza Vaccine Update

Tang, Lin 26 April 2013 (has links)
Influenza is a public health problem that causes serious illness and deaths all over the world. Vaccination has been shown to be the most effective means to prevent infection. The primary components of an influenza vaccine are weakened virus strains. Vaccination triggers the immune system to develop antibodies against those strains whose viral surface glycoprotein hemagglutinin (HA) is similar to that of the vaccine strains. However, the influenza vaccine must be updated annually, since the antigenic structure of HA is constantly mutating. The hemagglutination inhibition (HI) assay is a laboratory procedure frequently applied to evaluate the antigenic relationships of influenza viruses. It enables the World Health Organization (WHO) to recommend appropriate updates on the strains that will most likely be protective against the circulating influenza strains. However, the HI assay is labour intensive and time-consuming, since it requires several controls for standardization. We use two machine-learning methods, an Artificial Neural Network (ANN) and logistic regression, as well as a mixed-integer optimization model, to predict antigenic variants. The ANN generalizes the input data to patterns inherent in the data, and then uses these patterns to make predictions. The logistic regression model identifies and selects the amino acid positions that contribute most significantly to antigenic difference; its output is used to predict the antigenic variants based on the predicted probability. The mixed-integer optimization model is formulated to find hyperplanes that enable binary classification. The performance of our models is evaluated by cross validation.
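The logistic-regression component can be illustrated on toy data. The sketch below uses hypothetical per-position amino-acid difference indicators as features, not the real HI-assay dataset, and plain stochastic gradient descent rather than the thesis's fitting procedure.

```python
# Minimal logistic-regression sketch (illustrative; toy data, not the HI-assay
# dataset): predict "antigenic variant" (1) vs "similar" (0) from hypothetical
# per-position amino-acid difference indicators.
import math

def train_logreg(X, y, lr=0.5, epochs=2000):
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = b + sum(wj * xj for wj, xj in zip(w, xi))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                          # gradient of the log loss
            b -= lr * g
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
    return w, b

def predict(w, b, xi):
    z = b + sum(wj * xj for wj, xj in zip(w, xi))
    return 1.0 / (1.0 + math.exp(-z))

# Feature j is 1 if the two strains differ at hypothetical HA position j;
# in this toy labeling, two or more differences make an antigenic variant.
X = [[0, 0, 0], [1, 0, 0], [1, 1, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1]]
y = [0, 0, 1, 1, 1, 0]
w, b = train_logreg(X, y)
```

The fitted weights play the role described in the abstract: large positive weights flag the positions that contribute most to antigenic difference, and the predicted probability classifies a strain pair as variant or similar.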
68

Generalisierte Informationsstrukturen für Anwendungen im Bauwesen / Generalized information structures for applications in civil engineering

Bilchuk, Irina. Unknown Date (has links) (PDF)
Techn. Universität Berlin, Diss., 2005.
69

Teoria da relatividade restrita : uma introdução histórico-epistemológica e conceitual voltada ao ensino médio / Special theory of relativity: a historical-epistemological and conceptual introduction for high school

Fuchs, Eduardo Ismael January 2016 (has links)
This work narrates a didactic experience of applying a module addressing a topic of Modern and Contemporary Physics, the Special Theory of Relativity, in high school. The proposal was implemented at a private school in the municipality of Arroio do Meio, RS, with a third-year class of the regular secondary level, under the theoretical framework of the cognitive theory of Jean William Fritz Piaget (1896-1980) and the epistemological framework of Thomas Samuel Kuhn (1922-1996). The planning of the classes, the implementation of the proposal, and the results obtained from its classroom application as an extracurricular course are described. The way the module was designed and the level of depth that could be reached appear throughout the text, which also offers a review of the literature discussing the relevance of including Modern and Contemporary Physics in the high school curriculum. The results indicate that it is possible to teach topics of Modern and Contemporary Physics in regular education, that the students enjoyed and showed willingness to learn current subjects, and that the effort to introduce small curricular updates is worthwhile and should be encouraged as one of the possible alternatives for improving the quality of teaching in basic education. Finally, an educational product, in the form of a text offering support, guidance, and motivation to physics teachers, is presented.
70

Integration with Outlook Calendar

Nisstany, Saman January 2018 (has links)
This report covers further development of the Realtime-Updated Dashboard made by two students for Flex Applications in VT2017. The task now is to integrate the Dashboard with the Outlook calendar. A theoretical deepening into the General Data Protection Regulation was made due to recent developments in the European Union; this was used to set strict guidelines for the design, consent handling, and security of the application. The application is a back-end service written mostly in C#, though some TypeScript was used with the Angular 2 framework, along with LESS and HTML5. The application is developed as a stand-alone project, since the Realtime-Updated Dashboard is now live in the system and integrating directly with it would pose a security risk for Flex. This was also a great opportunity to study and learn more about the ASP.NET MVC model along with TypeScript, Angular 2, LESS and HTML5. Integrating with the Outlook calendar is just the first step; more calendars will be added in time. The main point of the application is to give the Realtime-Updated Dashboard added value and to prove that integration with the Outlook calendar is possible.
