  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.
21

Immutability: An Empirical Study in Scala / Oföränderlighet: en empirisk studie i Scala

Axelsson, Ludvig January 2017 (has links)
Utilizing immutability is considered to have many desirable benefits when it comes to software development and reasoning about programs. It is also one of the core principles of functional programming, and many programming languages have support for specifying immutability. By specifying immutability, developers can write code that, for example, prevents program state from being unintentionally mutated. The Scala programming language is a functional and object-oriented language in which developers can specify immutability with reassignable and non-reassignable variables. The type system in Scala has no built-in support for developers to express the fact that a type is immutable; immutability is instead a convention and considered best practice. However, knowledge about immutability usage and how prevalent it is in real-world Scala code has until this point been non-existent. This project presents an immutability analysis and evaluation of six small-to-large open-source projects written in Scala, providing empirical data on immutability usage. The analysis investigates the immutability property of templates, where a template refers to one of Scala's different class types, with respect to three distinct properties: shallow, conditionally deep, and deep immutability, where deep is the strongest immutability property. The analysis works as a plug-in for the Scala compiler that statically analyzes the source code of projects. We report immutability statistics for each evaluated project, including three widely used projects: Scala's standard library, Akka's actor framework, and ScalaTest. Explanations of why stronger immutability properties do not hold are also provided. The analysis shows that the majority of templates for each project satisfied an immutability property and were not classified as mutable.
Because each analyzed project had templates that were assumed to be mutable, as they were unreachable by our analysis, there could potentially be more templates that satisfy an immutability property. Inheritance is shown to be an important factor in a template's immutability, and mutability was found to be lower for the template types case class and singleton object. This can be seen as intended by the designers of Scala, indicating that these types of class abstraction help programmers utilize immutability. Our results show that immutability is frequently used in Scala, and the high degree of immutability usage could be due to the functional nature of the language.
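The shallow-versus-deep distinction the analysis draws can be illustrated outside Scala as well. The following Python sketch is an illustrative stand-in (not the thesis's compiler plug-in, and the class names are invented): non-reassignable fields alone give only shallow immutability, because a field may still reference mutable state.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A "shallowly immutable" template: its fields cannot be reassigned
# (frozen=True), but one field references a mutable object.
@dataclass(frozen=True)
class ShallowPoint:
    coords: List[int]          # mutable payload -> only shallow immutability

# A "deeply immutable" template: non-reassignable fields that reference
# only immutable values, transitively.
@dataclass(frozen=True)
class DeepPoint:
    coords: Tuple[int, ...]    # immutable payload -> deep immutability

shallow = ShallowPoint([1, 2])
deep = DeepPoint((1, 2))

shallow.coords.append(3)       # allowed: reachable state mutated in place
# shallow.coords = [9]         # would raise FrozenInstanceError
# deep.coords = (9,)           # would raise FrozenInstanceError
```

In Scala terms, the frozen fields play the role of non-reassignable `val`s; deep immutability additionally requires that every reachable value is itself immutable.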
22

An empirical evaluation of information theory-based software metrics in comparison to counting-based metrics: case-study approach

Govindarajan, Rajiv 08 May 2004 (has links)
The field of software engineering embraces measurement, analysis, and modeling of software. Software metrics are often based on counting, whereas this thesis adopts information theory. The goal of this research is to show that the information theory-based metrics proposed by Allen can be more useful for software development projects than counting-based metrics. Briand et al. have defined five families of measures based on counting the elements of a graph. This research considers a hypergraph system. The Parallel Mathematical Library Project (PMLP) was used as the case study. Abstract semantic graphs were generated for the C++ source files of PMLP in the form of nodes × hyperedges tables, which were measured for both counting-based and information theory-based measures. Analysis showed that information theory-based metrics provide finer-grained distinctions among modules than counting-based metrics. The case study measurements also conformed to the properties proposed by Briand et al.
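To make the contrast concrete, here is a minimal hypothetical sketch (not Allen's actual metric definitions) of how an entropy-based measure can separate two modules that a pure count treats as identical: both modules below have four nodes and three hyperedges, yet their incidence tables carry very different amounts of information.

```python
import math
from collections import Counter

def pattern_entropy(incidence):
    """Shannon entropy (bits) of the distribution of row patterns in a
    nodes x hyperedges incidence table. Distinct connection patterns add
    information; identical rows add none."""
    rows = [tuple(row) for row in incidence]
    counts = Counter(rows)
    n = len(rows)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Two modules with the same counting measure (4 nodes, 3 hyperedges) ...
uniform  = [[1, 0, 0], [1, 0, 0], [1, 0, 0], [1, 0, 0]]   # all nodes alike
distinct = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]]   # all nodes differ

print(pattern_entropy(uniform))   # 0.0 -> no internal variety
print(pattern_entropy(distinct))  # 2.0 -> maximal variety for 4 rows
```

A counting-based size measure reports 4 for both modules; the entropy separates them, which is the fine-grained distinction the thesis reports.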
23

Unfolding the Rationale for Code Commits

Alsafwan, Khadijah Ahmad 06 June 2018 (has links)
One of the main reasons why developers investigate code history is to search for the rationale for code commits. Existing work found that developers report that rationale is one of the most important aspects to understand code changes and that it can be quite difficult to find. While this finding strongly points out the fact that understanding the rationale for code commits is a serious problem for software engineers, no current research efforts have pursued understanding in detail what specifically developers are searching for when they search for rationale. In other words, while the rationale for code commits is informally defined as, "Why was this code implemented this way?" this question could refer to aspects of the code as disparate as, "Why was it necessary to implement this code?"; "Why is this the way in which it was implemented?"; or "Why was the code implemented at that moment?" Our goal with this study is to improve our understanding of what software developers mean when they talk about the rationale for code commits, i.e., how they "unfold" rationale. We additionally study which components of rationale developers find important, which ones they normally need to find, which ones they consider specifically difficult to find, and which ones they normally record in their own code commits. This new, detailed understanding of the components of the rationale for code commits may serve as inspiration for novel techniques to support developers in seeking and accurately recording rationale. / MS / Modern software systems evolution is based on the contribution of a large number of developers. In version control systems, developers introduce packaged changes called code commits for various reasons. In this process of modifying the code, the software developers make some decisions. These decisions need to be understood by other software developers. The question “why the code is this way?” is used by software developers to ask for the rationale behind code changes. 
The question could refer to aspects of the code as disparate as, “Why was it necessary to implement this code?”; “Why is this the way in which it was implemented?”; or “Why was the code implemented at that moment?” Our goal with this study is to improve our understanding of what software developers mean when they talk about the rationale for code commits, i.e., how they “unfold” rationale. We additionally study which components of rationale developers find important, which ones they normally need to find, which ones they consider specifically difficult to find, and which ones they normally record in their own code commits. This new, detailed understanding of the components of the rationale for code commits will allow researchers and tool builders to understand what developers mean when they mention rationale, thereby assisting the development of tools and techniques to support developers when seeking and recording rationale.
24

It's Not Black and White: An Empirical Study of the 2015-2016 U.S. College Protests

Kelleher, Kaitlyn Anne 01 January 2017 (has links)
Beginning in October 2015, student protests erupted at many U.S. colleges and universities. This wave of demonstrations prompted an ongoing national debate over the following question: what caused this activism? Leveraging existing theoretical explanations, this paper attempts to answer this question through an empirical study of the 73 most prominent college protests from October 2015 to April 2016. I use an original data set with information collected from U.S. News and World Report to determine what factors at these 73 schools were most predictive of the protests. My findings strongly suggest that the probability of a protest increases at larger, more selective institutions. I also find evidence against the dominant argument that the marginalization of minority students exclusively caused this activism. Using my empirical results, this paper presents a new theoretical explanation for the 2015-2016 protests. I argue that racial tensions sparked the first demonstration. However, as the protests spread to other campuses, they were driven less by racial grievances and more by a pervasive culture of political correctness. This paper concludes by applying this new theoretical framework to the budding wave of 2017 protests.
25

Empirical studies of financial and labor economics

Li, Mengmeng 12 August 2016 (has links)
This dissertation consists of three essays in financial and labor economics. It provides empirical evidence for testing the efficient market hypothesis in some financial markets and for analyzing the trends of power couples’ concentration in large metropolitan areas. The first chapter investigates the Bitcoin market’s efficiency by examining the correlation between social media information and Bitcoin future returns. First, I extract Twitter sentiment information from the text analysis of more than 130,000 Bitcoin-related tweets. Granger causality tests confirm that market sentiment information affects Bitcoin returns in the short run. Moreover, I find that time series models that incorporate sentiment information better forecast Bitcoin future prices. Based on the predicted prices, I also implement an investment strategy that yields a sizeable return for investors. The second chapter examines episodes of exuberance and collapse in the Chinese stock market and the second-board market using a series of extended right-tailed augmented Dickey-Fuller tests. The empirical results suggest that multiple “bubbles” occurred in the Chinese stock market, although insufficient evidence is found to claim the same for the second-board market. The third chapter analyzes the trends of power couples’ concentration in large metropolitan areas of the United States between 1940 and 2010. The urbanization of college-educated couples between 1940 and 1990 was primarily due to the growth of dual-career households and the resulting severity of the co-location problem (Costa and Kahn, 2000). However, the concentration of college-educated couples in large metropolitan areas stopped increasing between 1990 and 2010. According to the results of a multinomial logit model and a triple difference-in-difference model, this is because the co-location effect faded away after 1990.
26

CONTEXT-AWARE DEBUGGING FOR CONCURRENT PROGRAMS

Chu, Justin 01 January 2017 (has links)
Concurrency faults are difficult to reproduce and localize because they usually occur under specific inputs and thread interleavings. Most existing fault localization techniques focus on sequential programs but fail to identify faulty memory access patterns across threads, which are usually the root causes of concurrency faults. Moreover, existing techniques for sequential programs cannot be adapted to identify faulty paths in concurrent programs. While concurrency fault localization techniques have been proposed that analyze passing and failing executions obtained from running a set of test cases to identify faulty access patterns, they primarily rely on statistical analysis. We present a novel approach to fault localization using feature selection techniques from machine learning. Our insight is that the concurrency access patterns obtained from a large volume of coverage data generally constitute high-dimensional data sets, yet existing statistical analysis techniques for fault localization are usually applied to low-dimensional data sets. Each additional failing or passing run can provide more diverse information, which can help localize faulty concurrency access patterns in code. The patterns with maximum feature diversity information can point to the most suspicious pattern. We then apply a data mining technique to identify the interleaving patterns that occur most frequently and to provide the possible faulty paths. We also evaluate the effectiveness of fault localization using test suites generated from different test adequacy criteria. We have evaluated our approach, Cadeco, on 10 real-world multi-threaded Java applications. Results indicate that Cadeco outperforms state-of-the-art approaches for localizing concurrency faults.
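As a rough illustration of the statistical baseline this thesis contrasts with, the following hypothetical Python sketch ranks access patterns by a Tarantula-style suspiciousness score computed from passing and failing runs. The pattern names and run data are invented; this is not Cadeco's feature-selection approach.

```python
def suspiciousness(pattern, failing, passing):
    """Tarantula-style score: patterns observed mostly in failing runs rank
    higher. `failing`/`passing` are lists of runs, each run a set of
    observed cross-thread access patterns (names here are invented)."""
    f = sum(pattern in run for run in failing) / max(len(failing), 1)
    p = sum(pattern in run for run in passing) / max(len(passing), 1)
    return f / (f + p) if (f + p) > 0 else 0.0

# Invented runs: "W1-R2" = thread 1 writes, then thread 2 reads, etc.
failing = [{"W1-R2", "R1-R2"}, {"W1-R2"}]
passing = [{"R1-R2"}, {"R1-R2", "W1-W1"}]

patterns = {pat for run in failing + passing for pat in run}
ranked = sorted(patterns,
                key=lambda q: suspiciousness(q, failing, passing),
                reverse=True)
print(ranked[0])  # "W1-R2": seen only in failing runs
```

The thesis's observation is that such scores operate pattern-by-pattern on low-dimensional summaries, whereas feature selection can exploit the full high-dimensional coverage data.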
27

The Australian Freedom of Information Legislation and its applicability to Sri Lanka: an empirical study

Weereratne, Anura R, n/a January 2001 (has links)
The dissertation sets out the results of an evaluation of certain aspects of the Commonwealth of Australia's Freedom of Information legislation and proposals to introduce a Freedom of Information law in Sri Lanka. The major purposes of the study were: (i) to evaluate whether the Commonwealth FOI Act has achieved the objects of Parliament, i.e., whether members of the public can have free access to government information subject to important exemptions; and (ii) to determine whether an FOI regime should be introduced in Sri Lanka. In conducting my research, I devoted three chapters to FOI in Australia, including the development of the legislation. I analysed key components of the legislation and researched to what extent the FOI Act has achieved its objects. I devoted two chapters to the concept of transparency of government in Sri Lanka, the attitude of the courts towards the concept of the right to information, and whether Sri Lanka needs a Freedom of Information Act. In the last two chapters, I devoted a chapter each to the concept of the translocation of laws and to an ideal FOI Act for Sri Lanka, which is an adaptation of the Australian Act. The individual components of the methodology incorporated: (i) a literature survey of the Commonwealth FOI Act and of freedom of information in the United Nations, the USA, Sweden, Canada, and New Zealand; (ii) a literature survey concerning the transparency of government in Sri Lanka; (iii) interviews with a cross section of Commonwealth FOI administrators and of key politicians, lawyers, and members of the press and public in Sri Lanka; and (iv) research on the Australian FOI legislation. The empirical data present an analysis of key features of the Commonwealth FOI Act with particular attention to exemption clauses. I have recommended some amendments to the FOI Act in view of the Commonwealth Government's policy of outsourcing some of its activities, and the creation of a position of FOI Commissioner.
Finally, my research indicates that Sri Lanka needs freedom of information legislation to meet the challenges facing a developing country that is endeavoring to reach 'newly developed' status early in the new millennium. Furthermore, international lenders and donors now require that developing countries like Sri Lanka that seek aid show more transparency in their activities. I have drafted a Freedom of Information Bill for Sri Lanka, which is in Appendix "G". I have based the draft on the Australian law, adapted to suit the local conditions in Sri Lanka.
28

Epiphanies of finitude: a phenomenological study of existential reading

Sopcak, Paul 06 1900 (has links)
A prominent hypothesis in literary studies is that readers, especially those who are fully immersed, engage empathically with fictional characters. This dissertation provides a critique of the Cartesian assumptions embedded in contemporary (cognitive scientific) models of empathy and then goes on to provide an alternative account of empathy based especially on Husserl’s and Heidegger’s phenomenology. According to this alternative, empathy does not establish but rather discloses in reflection an already present intersubjectivity from which it is derivative. It is also held that readers who are fully empathically engaged in a literary text lose self-awareness. I provide a critique of this view and present a Husserlian model according to which full engagement with the other and continuation of a certain kind of self-awareness occur simultaneously. This phenomenological alternative is based on the notion that an experiential self-givenness or “mineness” accompanies all my experiences and is prior to any objectifying forms of self-awareness. I then critique Cartesian models of (self-)reflection and self-modification in literary reading and, with the help of Heidegger, suggest a phenomenological model within which the distinction between modification of beliefs and the modification that is inherent in experiencing becomes understandable as contingent on the form of ontological interrogation that Merleau-Ponty terms “radical reflection”. Finally, I present a series of empirical studies investigating whether the preceding theoretical distinctions are borne out in the experiences of actual readers of literary texts concerned with human finitude.
Phenomenological methods (Kuiken, Schopflocher, and Wild; Kuiken and Miall, “Numerically Aided Phenomenology”) were employed to 1) identify several distinct types of reading experience, 2) spell out how one of those types instantiates ‘existential reading’ as conceived here, and 3) provide convergent and discriminant validation of this type of reading experience. Of particular interest was whether a form of existential reading can be understood as an event during which readers engage the text through a form of empathic engagement that is grounded in an a priori intersubjectivity, that retains an experiential self-awareness or “mineness” simultaneously with empathic engagement, and that supports a non-Cartesian form of “radical reflection” that opens onto an ontological consideration of finitude.
29

A Manifestation of Model-Code Duality: Facilitating the Representation of State Machines in the Umple Model-Oriented Programming Language

Badreldin, Omar 18 April 2012 (has links)
This thesis presents research to build and evaluate an embedding of a textual form of state machines into high-level programming languages. The work entailed adding state machine syntax and code generation to the Umple model-oriented programming technology. The added concepts include states, transitions, actions, and composite states as found in the Unified Modeling Language (UML). This approach allows software developers to take advantage of modeling abstractions in their textual environments, without sacrificing the added value of visual modeling. Our efforts in developing state machines in Umple followed a test-driven approach to ensure high quality and usability of the technology. We have also developed a syntax-directed editor for Umple, similar to those available for other high-level programming languages. We conducted a grounded theory study of Umple users and used the findings iteratively to guide our experimental development. Finally, we conducted a controlled experiment to evaluate the effectiveness of our approach. By enhancing the code to be almost as expressive as the model, we further support model-code duality: the notion that model and code are two faces of the same coin. Systems can and should be equally well specified textually and diagrammatically. Such duality will benefit modelers and coders alike. Our work suggests that code enhanced with state machine modeling abstractions is semantically equivalent to visual state machine models. The flow of the thesis is as follows: the research hypothesis and questions are presented in “Chapter 1: Introduction”. The background is explored in “Chapter 2: Background”. “Chapter 3: Syntax and semantics of simple state machines” and “Chapter 4: Syntax and semantics of composite state machines” investigate simple and composite state machines in Umple, respectively.
“Chapter 5: Implementation of composite state machines” presents the approach we adopt for the implementation of composite state machines that avoids explosion of the amount of generated code. From this point on, the thesis presents empirical work. A grounded theory study is presented in “Chapter 6: A Grounded theory study of Umple”, followed by a controlled experiment in “Chapter 7: Experimentation”. These two chapters constitute our validation and evaluation of Umple research. Related and future work is presented in “Chapter 8: Related work”.
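The idea of embedding states, transitions, and actions textually rather than diagrammatically can be sketched in a few lines. This is illustrative Python with an invented example machine, not Umple syntax and not the code Umple generates:

```python
# Hypothetical textual encoding of a simple state machine: states and
# transitions live directly in the source as data driving a dispatcher.
class GarageDoor:
    # (current state, event) -> next state
    TRANSITIONS = {
        ("Closed",  "pressButton"): "Opening",
        ("Opening", "reachTop"):    "Open",
        ("Open",    "pressButton"): "Closing",
        ("Closing", "reachBottom"): "Closed",
    }

    def __init__(self):
        self.state = "Closed"

    def fire(self, event):
        """Take the transition for (state, event) if one is defined;
        events with no matching transition are ignored, as in UML."""
        nxt = self.TRANSITIONS.get((self.state, event))
        if nxt is not None:
            self.state = nxt
        return self.state

door = GarageDoor()
door.fire("pressButton")   # Closed -> Opening
door.fire("reachTop")      # Opening -> Open
print(door.state)          # "Open"
```

A textual form like this is diff-friendly and versionable, which is part of the model-code duality argument; Umple additionally generates such dispatch logic from its own state machine syntax, including composite states.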
30

Object-oriented software development effort prediction using design patterns from object interaction analysis

Adekile, Olusegun 15 May 2009 (has links)
Software project management is arguably the most important activity in modern software development projects. In the absence of realistic and objective management, the software development process cannot be managed in an effective way. Software development effort estimation is one of the most challenging and researched problems in project management. With the advent of object-oriented development, there have been studies to transpose some of the existing effort estimation methodologies to the new development paradigm. However, there is no holistic approach in existence that allows for the refinement of an initial estimate produced in the requirements-gathering phase through to the design phase. A SysML point methodology is proposed, based on a common, structured, and comprehensive modeling language (OMG SysML), that factors the models corresponding to the primary phases of object-oriented development into producing an effort estimate. This dissertation presents a Function Point-like approach, named Pattern Point, which was conceived to estimate the size of object-oriented products using the design patterns found in object interaction modeling from the late OO analysis phase. In particular, two measures are proposed (PP1 and PP2) that are theoretically validated, showing that they satisfy well-known properties necessary for size measures. An initial empirical validation is performed to assess the usefulness and effectiveness of the proposed measures in predicting the development effort of object-oriented systems. Moreover, a comparative analysis is carried out, taking into account several other size measures. The experimental results show that the Pattern Point measure can be effectively used during the OOA phase to predict the effort values with a high degree of confidence. The PP2 metric yielded the best results, with an aggregate PRED(0.25) = 0.874.
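The PRED(0.25) figure reported above has a standard definition: the fraction of projects whose estimate has a magnitude of relative error (MRE) of at most 25%. A small sketch with invented effort values:

```python
def pred(q, actuals, estimates):
    """PRED(q): fraction of estimates whose magnitude of relative error
    MRE = |actual - estimate| / actual is at most q."""
    within = sum(abs(a - e) / a <= q for a, e in zip(actuals, estimates))
    return within / len(actuals)

# Hypothetical effort values (person-hours): 7 of the 8 estimates fall
# within 25% of the actual effort, so PRED(0.25) = 7/8.
actuals   = [100, 200, 150, 80, 120, 300, 250, 90]
estimates = [110, 190, 160, 85, 100, 280, 260, 200]

print(pred(0.25, actuals, estimates))  # 0.875
```

A PRED(0.25) of 0.874 thus means about 87% of the evaluated predictions were within 25% of the actual effort, which is a strong result by the common Conte et al. benchmark of PRED(0.25) ≥ 0.75.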
