651

Regime shifts in the Swedish housing market - A Markov-switching model analysis / Regimskiften på den svenska bostadsmarknaden - En analys med Markov-switchingmodeller

Stockel, Jakob, Skantz, Niklas January 2016 (has links)
Problem statement: Accurate and reliable forecasts of trends in the housing market are useful to market participants as well as policy makers, and can help minimize risk related to market uncertainty. Since the burst of the housing bubble in the early 1990s, the price level of single-family houses in Sweden has risen sharply. The Swedish housing market has experienced an unusually long period of high growth rates in transaction prices, which has opened up a discussion about the risk of another housing bubble. Business and property cycles have been shown to contain asymmetries that linear models are unable to pick up, making such models inappropriate for analyzing cycles. Approach: This study therefore uses non-linear models that are able to capture these asymmetries. The estimated models are variations of the Markov-switching regression model, i.e. the Markov-switching autoregressive (MS-AR) model and the Markov-switching dynamic regression (MS-DR) model. Results: Our findings show that the MS-AR(4) model allowing for varying variance across regimes, estimated on the growth rate of FASTPI, produces superior forecasts over other MS-AR models as well as variations of the MS-DR model. The average expected duration of a positive growth regime is between 6.3 and 7.3 years, and the average expected duration of a negative growth regime is between 1.2 and 2.5 years. Conclusion: The next regime shift in the Swedish housing market is projected to occur between 2018 and 2019, counting the contraction period in 2012 as the most recent negative regime. Our findings support other studies' findings indicating that the longer the market has remained in one state, the greater the risk of a regime shift.
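As a hedged illustration of the modeling approach summarized above (not the thesis's own code or data), the following Python sketch fits a two-regime MS-AR(4) with regime-dependent variance to a simulated growth-rate series and reports the expected regime durations, which follow from the fitted transition probabilities as 1/(1 - p_ii); statsmodels' MarkovAutoregression is assumed to be available.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Simulated stand-in for quarterly FASTPI growth rates: a long positive-growth
# regime, a short negative-growth episode, then positive growth again.
growth = np.concatenate([rng.normal(1.5, 0.5, 120),
                         rng.normal(-0.8, 1.2, 30),
                         rng.normal(1.5, 0.5, 50)])

# Two-regime MS-AR(4) allowing the residual variance to differ across regimes.
model = sm.tsa.MarkovAutoregression(growth, k_regimes=2, order=4,
                                    switching_variance=True)
result = model.fit()

# Expected duration of regime i is 1 / (1 - p_ii), taken from the fitted
# transition matrix; reported in the same time unit as the input series.
print(result.expected_durations)
```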
652

Model Checking Techniques for Design and Analysis of Future Hardware and Software Systems

Märcker, Steffen 12 April 2021 (has links)
Computer hardware and software laid the foundation for fundamental innovations in science, technology, economics and society. Novel application areas generate an ever-increasing demand for computation power and storage capacities. Classic CMOS-based hardware and the von Neumann architecture are approaching their limits in miniaturization, power density and communication speed. To meet future demands, researchers work on new device technologies and architecture approaches, which in turn require new algorithms and a hardware/software co-design to exploit their capabilities. Since the overall system heterogeneity and complexity increases, the challenge is to build systems with these technologies that are both correct and performant by design. Formal methods in general and model checking in particular are established verification methods in hardware design, and have been successfully applied to many hardware, software and integrated hardware/software systems. In many systems, probabilistic effects arise naturally, e.g., from input patterns, production variations or the occurrence of faults. Probabilistic model checking facilitates the quantitative analysis of performance and reliability measures in stochastic models that formalize these probabilistic effects. The interdisciplinary research project Center for Advancing Electronics Dresden, cfaed for short, aims to explore hardware and software technologies for future information processing systems. It joins the research efforts of different groups working on technologies for all system layers, ranging from transistor device research through system architecture up to the application layer. The collaborations among the groups showed a demand for new formal methods and enhanced tools to assist the design and analysis of technologies at all system layers and their cross-layer integration. Addressing these needs is the goal of this thesis. This work contributes to probabilistic model checking for Markovian models with new methods to compute two essential measures in the analysis of hardware/software systems and a method to tackle the state-space explosion problem: 1) Conditional probabilities are well known in stochastic theory and statistics, but efficient methods did not exist to compute conditional expectations in Markov chains and extremal conditional probabilities in Markov decision processes. This thesis develops new polynomial-time algorithms, and it provides a mature implementation for the probabilistic model checker PRISM. 2) Relativized long-run and relativized conditional long-run averages are proposed in this work to reason about probabilities and expectations in Markov chains in the long run when zooming into sets of states or paths. Both types of long-run averages are implemented for PRISM. 3) Symmetry reduction is an effective abstraction technique to tame the state-space explosion problem. However, state-of-the-art probabilistic model checkers apply it only after building the full model and offer no support for specifying non-trivial symmetric components. This thesis fills this gap with a modeling language based on symmetric program graphs that facilitates symmetry reduction at the source level. The new language can be integrated seamlessly into the PRISM modeling language. This work contributes to the research on future hardware/software systems in cfaed with three practical studies that are enabled by the developed methods and their implementations.
1) To confirm the relevance of the new methods in practice and to validate the results, the first study analyzes a well-understood synchronization protocol, a test-and-test-and-set spinlock. Beyond this confirmation, the analysis demonstrates the capability to compute properties that are hardly accessible to measurements. 2) Probabilistic write-copy/select is an alternative protocol to overcome the scalability issues of classic resource-locking mechanisms. A quantitative analysis verifies the protocol's principle of operation and evaluates the performance trade-offs to guide future implementations of the protocol. 3) The impact of a new device technology is hard to estimate since circuit-level simulations are not available in the early stages of research. This thesis proposes a formal framework to model and analyze circuit designs for novel transistor technologies. It encompasses an operational model of electrical circuits, a functional model of polarity-controllable transistor devices and algorithms for design space exploration in order to find optimal circuit designs using probabilistic model checking. A practical study assesses the model accuracy for a lab device based on germanium nanowires and performs an automated exploration and performance analysis of the design space of a given switching function. The experiments demonstrate how the framework enables an early systematic design space exploration and performance evaluation of circuits for experimental transistor devices.

Contents:
1. Introduction (1.1 Related Work)
2. Preliminaries
3. Conditional Probabilities in Markovian Models (3.1 Methods for Discrete- and Continuous-Time Markov Chains; 3.2 Reset Method for Markov Decision Processes; 3.3 Implementation; 3.4 Evaluation and Comparative Studies; 3.5 Conclusion)
4. Long-Run Averages in Markov Chains (4.1 Relativized Long-Run Average; 4.2 Conditional State Evolution; 4.3 Implementation; 4.4 Conclusion)
5. Language-Support for Immediate Symmetry Reduction (5.1 Probabilistic Program Graphs; 5.2 Symmetric Probabilistic Program Graphs; 5.3 Implementation; 5.4 Conclusion)
6. Practical Applications of the Developed Techniques (6.1 Test-and-Test-and-Set Spinlock: Quantitative Analysis of an Established Protocol; 6.2 Probabilistic Write/Copy-Select: Quantitative Analysis as Design Guide for a Novel Protocol; 6.3 Circuit Design for Future Transistor Technologies: Evaluating Polarity-Controllable Multiple-Gate FETs)
7. Conclusion
Bibliography
Appendices: A. Conditional Probabilities and Expectations (A.1 Selection of Benchmark Models; A.2 Additional Benchmark Results; A.3 Comparison PRISM vs. Storm); B. Language-Support for Immediate Symmetry Reduction (B.1 Syntax of the PRISM Modeling Language; B.2 Multi-Core Example); C. Practical Applications of the Developed Techniques (C.1 Test-and-Test-and-Set Spinlock; C.2 Probabilistic Write/Copy-Select; C.3 Circuit Design for Future Transistor Technologies)
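To make the notion of a relativized long-run average concrete, here is a small hedged numpy sketch on a toy discrete-time Markov chain (the chain, state labels and numbers are illustrative assumptions, not taken from the thesis): it computes the long-run fraction of time spent in a "busy" state relative to the time spent in "up" states.

```python
import numpy as np

# Toy 4-state DTMC: states 0 and 1 are "up" (state 1 is additionally "busy"),
# states 2 and 3 model degraded/failed behavior.
P = np.array([[0.90, 0.05, 0.04, 0.01],
              [0.10, 0.85, 0.04, 0.01],
              [0.50, 0.00, 0.45, 0.05],
              [0.20, 0.00, 0.00, 0.80]])

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

up, busy = [0, 1], [1]
# Ordinary long-run average of "busy" vs. the average relativized to the "up" states.
print("long-run fraction busy:          ", pi[busy].sum())
print("fraction busy, relativized to up:", pi[busy].sum() / pi[up].sum())
```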
653

Fécondité par rang au sein d'une génération en France et au Québec : estimation de probabilités d'agrandissement à partir d'un seul recensement (Order-specific fertility within a cohort in France and Quebec: estimating parity-progression probabilities from a single census)

Torres Vasquez, Alexander 06 1900 (has links)
No description available.
654

Venn Prediction for Survival Analysis : Experimenting with Survival Data and Venn Predictors

Aparicio Vázquez, Ignacio January 2020 (has links)
The goal of this work is to expand knowledge in the field of Venn prediction applied to survival data. Standard Venn predictors have been used with random forests and binary classification tasks, but they have not been utilised to predict events with survival data nor in combination with random survival forests. With the help of a data transformation, the survival task is transformed into several binary classification tasks. One key aspect of Venn prediction is the choice of categories. The standard number of categories is two, one for each class to predict. In this work, the usage of ten categories is explored and the performance differences between two and ten categories are investigated. Seven data sets are evaluated, and their results are presented with two and ten categories. For the Brier score and reliability score metrics, two categories offered the best results, while the quality metric performed better with ten categories. Occasionally, the underlying models are too optimistic; Venn predictors rectify this and produce well-calibrated probabilities.
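As a hedged sketch of the two-category Venn prediction setup described above (a simplified inductive variant on synthetic data, not the thesis's survival-specific construction), the function below hypothesises each label for a test object, groups it with the calibration objects that receive the same underlying random-forest prediction, and returns the resulting lower and upper probability estimates for class 1.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def venn_interval(x, X_cal, y_cal, clf):
    """Inductive Venn predictor sketch with two categories: the category of an
    object is simply the label the underlying classifier predicts for it."""
    cal_categories = clf.predict(X_cal)
    x_category = clf.predict(x.reshape(1, -1))[0]
    estimates = []
    for hypothetical_label in (0, 1):              # try each label for x in turn
        group = np.append(y_cal[cal_categories == x_category], hypothetical_label)
        estimates.append((group == 1).mean())      # empirical frequency of class 1
    return min(estimates), max(estimates)          # lower/upper probability for class 1

X, y = make_classification(n_samples=2000, random_state=1)
X_train, X_cal, y_train, y_cal = train_test_split(X, y, test_size=0.3, random_state=1)
clf = RandomForestClassifier(random_state=1).fit(X_train, y_train)
print(venn_interval(X_cal[0], X_cal[1:], y_cal[1:], clf))
```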
655

Влияние принципов поведенческой экономики на формирование предложения в условиях тендерных закупок : магистерская диссертация / The influence of the principles of behavioral economics on the formation of proposals in the context of tender purchases

Яковлева, П. М., Yakovleva, P. M. January 2021 (has links)
In tender purchases, it is important to account for the influence of many factors, beyond those of classical economics, when choosing a bidder's pricing strategy. The aim of this master's thesis is to develop a model for forecasting the price offers of participants in tender purchases. The thesis discusses the concept of price-offer forecasting, the factors influencing a tender participant, and the principles of behavioral economics. The sources used were research and methodological literature, legal and regulatory acts, and publicly available statistical data from various electronic trading platforms. The thesis develops a model for predicting a bidder's price offer based on the von Neumann-Morgenstern utility function. The model accounts for the influence of the relevant factors, allows the bidder's tactics to be adjusted at each bidding step, and maximizes the utility of the offer from the standpoint of behavioral economics.
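The following Python sketch illustrates the general idea of choosing a bid by maximizing von Neumann-Morgenstern expected utility; the logistic win-probability curve, the CARA utility form and every numeric parameter are hypothetical stand-ins, not values from the thesis.

```python
import numpy as np

def expected_utility(bid, cost=100.0, risk_aversion=0.02, a=8.0, b=0.05):
    """Expected utility of submitting `bid`: the win probability falls as the bid
    rises (logistic assumption); the profit on winning is valued with a CARA
    (constant absolute risk aversion) utility; losing is normalised to 0."""
    p_win = 1.0 / (1.0 + np.exp(-(a - b * bid)))
    profit = bid - cost
    utility_if_win = 1.0 - np.exp(-risk_aversion * profit)
    return p_win * utility_if_win

bids = np.linspace(100.0, 200.0, 201)
best_bid = bids[np.argmax([expected_utility(b) for b in bids])]
print(f"utility-maximising bid under these assumptions: {best_bid:.1f}")
```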
656

Verfahren des maschinellen Lernens zur Entscheidungsunterstützung (Machine Learning Methods for Decision Support)

Bequé, Artem 21 September 2018 (has links)
Nowadays the right decisions, be they strategic or operative, are important for every company, since they contribute directly to overall success. This success can be measured with quantitative metrics, for example the number of loyal customers or the number of incremental purchases. These decisions are typically made based on historical data that relates to all functions of the company in general and to customers in particular. Companies therefore seek to analyze this data and apply the obtained knowledge in decision making. Classification problems represent an example of such decisions. They are best solved when techniques of classical statistics and of machine learning are applied, since both are able to analyze huge amounts of data, to detect dependencies in the data patterns, and to produce probabilities, which form the basis for decision making. I apply these techniques and examine their suitability based on correlative models for decision making in credit scoring, and further extend the work with causal predictive models for direct marketing. In detail, I analyze the suitability of machine learning techniques for credit scoring along multiple dimensions, examine their ability to produce calibrated probabilities and apply techniques to improve the probability estimates. I further develop and propose a synergy heuristic between the methods of classical statistics and techniques of machine learning to improve the prediction quality of the former, and finally apply conversion models so that machine learning techniques account for the causal relationship between marketing campaigns and customer behavior in direct marketing (so-called uplift effects). The work has shown that techniques of machine learning are a suitable alternative to the methods of classical statistics for decision making and should be considered not only in research but should also find practical application in real-world settings.
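As a hedged illustration of probability calibration in a scoring context (synthetic data and scikit-learn's CalibratedClassifierCV as stand-ins; this is not the author's pipeline), the snippet below compares the Brier score of a raw gradient-boosting model with an isotonically calibrated version.

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced stand-in for a credit-scoring data set (about 10% "bad" cases).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.9],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

raw = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
calibrated = CalibratedClassifierCV(GradientBoostingClassifier(random_state=0),
                                    method="isotonic", cv=5).fit(X_train, y_train)

# Lower Brier score indicates better-calibrated probability estimates.
for name, model in [("raw", raw), ("calibrated", calibrated)]:
    print(name, brier_score_loss(y_test, model.predict_proba(X_test)[:, 1]))
```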
657

Is loss avoidance differentially rewarding in adolescents versus adults?: Differences in ventral striatum and anterior insula activation during the anticipation of potential monetary losses

Bretzke, Maria, Vetter, Nora C., Kohls, Gregor, Wahl, Hannes, Roessner, Veit, Plichta, Michael M., Buse, Judith 28 March 2023 (has links)
Avoiding loss is a crucial, adaptive guide to human behavior. While previous developmental research has primarily focused on gaining rewards, less attention has been paid to loss processing and its avoidance. In daily life, it is often unknown how likely an action is to result in a loss, making the role of uncertainty in loss processing particularly important. Using functional magnetic resonance imaging, we investigated the influence of varying outcome probabilities (12%, 34%, and 67%) on brain regions implicated in loss processing (ventral striatum (VS), anterior insula (AI)) by comparing 28 adolescents (10–18 years) and 24 adults (22–32 years) during the anticipation of potential monetary loss. Overall, results revealed slower reaction times (RTs) in adolescents compared to adults, with both groups being faster in the experimental (monetary) trials than in the control (verbal) trials. The fastest RTs were observed for the 67% outcome probability in both age groups. An age group × outcome probability interaction effect revealed the greatest differences between the groups for the 12% vs. the 67% outcome probability. Neurally, both age groups demonstrated a higher percent signal change in the VS and AI during the anticipation of potential monetary loss versus the verbal condition. However, adults demonstrated even greater activation of the VS and AI than adolescents during the anticipation of potential monetary loss, but not during the verbal condition. This may indicate that adolescents differ from adults in how they experience avoiding the loss of monetary rewards.
658

Fault diagnosis of lithium ion battery using multiple model adaptive estimation

Sidhu, Amardeep Singh 12 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Lithium ion (Li-ion) batteries have become integral parts of our lives; they are widely used in applications like handheld consumer products, automotive systems, and power tools, among others. To extract maximum output from a Li-ion battery under optimal conditions, it is imperative to have access to the state of the battery under every operating condition. Faults occurring in the battery, when left unchecked, can lead to irreversible and, under extreme conditions, catastrophic damage. In this thesis, an adaptive fault diagnosis technique is developed for Li-ion batteries. For the purpose of fault diagnosis, the battery is modeled with lumped electrical elements under the equivalent-circuit paradigm. The model accounts for much of the electro-chemical phenomena while keeping the computational effort at a minimum. The diagnosis process consists of multiple models representing the various conditions of the battery. A bank of observers is used to estimate the output of each model; the estimated output is compared with the measurement to generate residual signals. These residuals are then used in the multiple model adaptive estimation (MMAE) technique for generating model probabilities and for detecting the signature faults. The effectiveness of the fault detection and identification process also depends on the model uncertainties introduced by the battery modeling process. The diagnosis performance is compared for both linear and non-linear battery models. The non-linear battery model better captures the actual system dynamics and results in considerable improvement and hence robust battery fault diagnosis in real time. Furthermore, it is shown that the non-linear battery model enables precise battery condition monitoring at different degrees of over-discharge.
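To make the MMAE probability update concrete, here is a hedged, generic linear-Gaussian sketch (plain Kalman filters on hypothetical model matrices, not the thesis's equivalent-circuit battery models): each candidate model filters the measurement, and the Gaussian likelihood of its residual drives a Bayes update of the model probabilities.

```python
import numpy as np

def mmae_step(models, probs, x_est, P_est, z):
    """One multiple-model adaptive estimation step.

    models : list of (F, H, Q, R) matrices, one Kalman filter per fault hypothesis
    probs  : current model probabilities; x_est/P_est : per-model state estimates
    z      : new measurement vector
    """
    likelihoods = np.empty(len(models))
    for i, (F, H, Q, R) in enumerate(models):
        x_pred = F @ x_est[i]
        P_pred = F @ P_est[i] @ F.T + Q
        residual = z - H @ x_pred                        # innovation
        S = H @ P_pred @ H.T + R                         # innovation covariance
        likelihoods[i] = (np.exp(-0.5 * residual @ np.linalg.solve(S, residual))
                          / np.sqrt(np.linalg.det(2.0 * np.pi * S)))
        K = P_pred @ H.T @ np.linalg.inv(S)              # Kalman gain
        x_est[i] = x_pred + K @ residual
        P_est[i] = (np.eye(len(x_pred)) - K @ H) @ P_pred
    probs = probs * likelihoods                          # Bayes update of model probabilities
    return probs / probs.sum(), x_est, P_est

# Tiny two-hypothesis demo with scalar state and measurement (illustrative numbers).
I1 = np.eye(1)
models = [(I1, I1, 0.01 * I1, 0.1 * I1),         # "healthy" model
          (0.95 * I1, I1, 0.01 * I1, 0.1 * I1)]  # hypothetical "fault" model
probs = np.array([0.5, 0.5])
x_est, P_est = [np.zeros(1), np.zeros(1)], [I1.copy(), I1.copy()]
probs, x_est, P_est = mmae_step(models, probs, x_est, P_est, z=np.array([0.2]))
print(probs)
```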
659

Satisficing solutions for multiobjective stochastic linear programming problems

Adeyefa, Segun Adeyemi 06 1900 (has links)
Multiobjective stochastic linear programming is a relevant topic: many real-life problems, ranging from portfolio selection to water resource management, can be cast into this framework. Objectivity is severely limited in this field due to the simultaneous presence of randomness and conflicting goals. In such a turbulent environment, the mainstay of rational choice does not hold, and it is virtually impossible to provide a truly scientific foundation for an optimal decision. In this thesis, we resort to the bounded rationality and chance-constrained principles to define satisficing solutions for multiobjective stochastic linear programming problems. These solutions are then characterized for the cases of normal, exponential, chi-squared and gamma distributions. Ways of singling out such solutions are discussed, and numerical examples are provided for the sake of illustration. An extension to the case of fuzzy random coefficients is also carried out. / Decision Sciences
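As a hedged sketch of the chance-constrained principle mentioned above (a toy two-variable problem with made-up data, and a single scalarised objective standing in for the multiple goals), the snippet below uses the standard normal-case deterministic equivalent: P(a^T x <= b) >= alpha with a ~ N(mu, Sigma) becomes mu^T x + Phi^{-1}(alpha) * sqrt(x^T Sigma x) <= b.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Made-up problem data: random constraint coefficients a ~ N(mu, Sigma).
mu = np.array([2.0, 3.0])
Sigma = np.array([[0.5, 0.1],
                  [0.1, 0.8]])
b, alpha = 10.0, 0.95
z_alpha = norm.ppf(alpha)

# Deterministic equivalent of the chance constraint P(a^T x <= b) >= alpha.
constraints = [{"type": "ineq",
                "fun": lambda x: b - mu @ x - z_alpha * np.sqrt(x @ Sigma @ x)}]

# Single surrogate objective c^T x standing in for the scalarised multiple goals.
c = np.array([1.0, 1.0])
result = minimize(lambda x: -c @ x, x0=np.array([1.0, 1.0]),
                  constraints=constraints, bounds=[(0.0, None), (0.0, None)])
print(result.x, c @ result.x)
```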
660

Grade 12 learners' problem-solving skills in probability

Awuah, Francis Kwadwo 06 1900 (has links)
This study investigated the problem-solving skills of Grade 12 learners in probability. A total of 490 Grade 12 learners from seven schools, categorised under four quintiles (a socioeconomic categorisation), were purposively selected for the study. A mixed-methods research methodology was employed. Bloom's taxonomy and the aspects of probability enshrined in the Mathematics Curriculum and Assessment Policy Statement (CAPS) document of 2011 were used as the framework of analysis. A cognitive test developed by the researcher was used as the instrument to collect data from learners, and the instrument passed the tests of validity and reliability. The quantitative data collected were analysed using descriptive and inferential statistics, and the qualitative data collected from learners were analysed through a content analysis of learners' scripts. The study found that the learners were more proficient in using Venn diagrams as an aid to solving probability problems than in using tree diagrams and contingency tables. Results also showed that, with the exception of the synthesis level of Bloom's taxonomy, learners in Quintile 4 (fee-paying schools) had statistically significantly (p-value < 0.05) higher achievement scores than learners in Quintiles 1 to 3 (non-fee-paying schools) at the knowledge, comprehension, application, analysis and evaluation levels of Bloom's taxonomy. Contrary to expectations, learners' achievement in probability decreased from Quintile 1 to Quintile 3 at all but the synthesis level of Bloom's taxonomy. Based on these findings, the study argued that the quintile ranking of schools in South Africa may be a useful, but not perfect, means of categorisation for helping to improve learner achievement. Furthermore, learners in the study demonstrated three main error types, namely computational, procedural and structural errors. Based on the findings, it was recommended that regular content-specific professional development be given to all teachers, especially on newly introduced topics, to enhance effective teaching and learning. / Mathematics Education / Ph. D. (Mathematics, Science and Technology Education)
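For context, here is a worked example of the kind of contingency-table task referred to above (the numbers are invented for illustration and are not taken from the study's test): the Python arithmetic below checks two events for independence and computes a conditional probability.

```python
# Hypothetical two-way contingency table for 700 learners:
#                 Plays sport   Does not play   Total
#   Male              240            160          400
#   Female             90            210          300
#   Total             330            370          700
total, male, sport, male_and_sport = 700, 400, 330, 240

p_male = male / total                      # about 0.571
p_sport = sport / total                    # about 0.471
p_male_and_sport = male_and_sport / total  # about 0.343

# Independence would require P(male) * P(sport) == P(male and sport).
print(p_male * p_sport, p_male_and_sport)  # about 0.269 vs 0.343, so not independent

# Conditional probability from the same table: P(plays sport | male).
print(male_and_sport / male)               # 0.6
```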
