81

Sharing Knowledge in Virtual Communities: Factors Affecting A Member's Intention to Share

Zhao, Li 09 1900 (has links)
This dissertation aims to advance empirical research in the realm of knowledge sharing in virtual communities and to help practitioners better understand the factors that inhibit (cost) or motivate (benefit) such behaviour. The impact of some costs and benefits (factors derived from social exchange theory) may be contingent upon certain social contexts or conditions (factors derived from social capital theory). To this end, two research models were developed (i.e., a main effects model and an interaction model) that integrate these two theories. New constructs specific to the virtual community context were also incorporated. To test these models, an online survey was administered to 968 members of a large IT professional virtual community comprising millions of registered users.

Findings from a structural equation modeling analysis of this data set suggest that specific benefits and social capital factors have direct effects on an individual's intention to share knowledge, and more importantly, that the impacts of some benefits are contingent upon certain social capital factors. Specifically, the impact of online score rewards on an individual's intention to share knowledge with others in the virtual community is contingent upon that person's trust in the people who are seeking knowledge from that individual. Additionally, the impact of reciprocity on an individual's intention to share knowledge is moderated by pro-sharing norms in the virtual community.

A major contribution of this dissertation is the provision of new theoretical insights that help explain how certain benefits and social capital factors affect knowledge sharing activity in virtual communities. It is hoped that these insights will help builders and managers of knowledge-based virtual communities better promote online knowledge sharing behaviours and improve the sustainability of such communities in the future. / Thesis / Doctor of Philosophy (PhD)
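The moderation findings described above can be illustrated with a toy regression containing an interaction term. This is a minimal sketch on synthetic data, not the dissertation's structural equation modeling analysis; the variable names (`reward`, `trust`, `intention`) and all coefficients are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 968  # sample size mirroring the survey; the data here are synthetic
reward = rng.normal(size=n)   # hypothetical "online score rewards" benefit
trust = rng.normal(size=n)    # hypothetical "trust in knowledge seekers"
# Simulate intention with a true interaction: the reward effect depends on trust
intention = 0.3 * reward + 0.4 * trust + 0.5 * reward * trust + rng.normal(size=n)

# Ordinary least squares with a product term (a simple stand-in for SEM)
X = np.column_stack([np.ones(n), reward, trust, reward * trust])
beta, *_ = np.linalg.lstsq(X, intention, rcond=None)
print(f"interaction coefficient: {beta[3]:.2f}")  # recovers roughly 0.5
```

A non-zero coefficient on the product term is what "the impact of rewards is contingent upon trust" corresponds to statistically.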
82

Fundamentals of Software Patent Protection at a University

Everett, Christopher E 10 May 2003 (has links)
Software protection by patents is an emerging field and thus is not completely understood by software developers, especially those in a university setting. University inventors must balance their publication productivity against their university's desire to license inventions that could be profitable. This balance stems from the one-year bar on filing a U.S. patent application after a public disclosure of the invention, such as a publication. The research provides evidence supporting the hypothesis that a university inventor can improve the protection of his or her software patent by applying certain information about patent prosecution practices and the relevant prior art. Software inventors need to be concerned with fulfilling the requirements of patent laws. Methods for fulfilling these requirements include using diagrams in patent applications, such as functional block diagrams, flowcharts, and state diagrams, and ensuring that the patent application is understandable by non-technical readers. Knowledge of prior art ensures that the inventor is not "reinventing the wheel," is not infringing on an existing patent, and understands the current state of the art. Knowledge of patent laws, diagrams, readability, and prior art enables a software inventor to take control of protecting his or her invention throughout the application process.
83

Empirical Likelihood For Change Point Detection And Estimation In Time Series Models

Piyadi Gamage, Ramadha D. 02 August 2017 (has links)
No description available.
84

Evaluating the Potential for Estimating Age of Even-aged Loblolly Pine Stands Using Active and Passive Remote Sensing Data

Quirino, Valquiria Ferraz 11 December 2014 (has links)
Data from an airborne laser scanner, a dual-band interferometric synthetic aperture radar (DBInSAR), and Landsat were evaluated for estimating ages of even-aged loblolly pine stands in Appomattox-Buckingham State Forest, Virginia, U.S.A. The DBInSAR data were acquired using the GeoSAR sensor in the summer of 2008 in both the P- and X-bands. The LiDAR data were acquired in the same summer using a small-footprint laser scanner. Loblolly pine stand ages were assigned using the establishment year of loblolly pine stands provided by the Virginia Department of Forestry. Random circular plots were established in stands that varied in age from 5 to 71 years and in site index from 21 to 29 meters (base age 25 years). LiDAR- and GeoSAR-derived independent variables were calculated. The final selected LiDAR model used the common logarithm of age as the dependent variable and the 99.5th percentile of height above ground as the independent variable (R2adj = 90.2%, RMSE = 4.4 years, n = 45). The final selected GeoSAR models used the reciprocal of age as the dependent variable and had three independent variables: the sum of the X-band magnitude, the 25th percentile of X/P-band magnitudes, and the 90th percentile of the X-band height above ground (R2adj = 84.1%, RMSE = 7.9 years, n = 46). The Vegetation Change Tracker (VCT) algorithm was run using a digital elevation layer, a land cover map, and a series of Landsat (5 and 7) images. A comparison was made between the loblolly pine stand ages obtained using the three methods and the reference data. The results show that: (1) although the VCT and reference ages differed most of the time, the differences were normally small, (2) all three remote sensing methods produced reliable age estimates, and (3) the Landsat-VCT algorithm produced the best estimates for younger stands (5 to 22 years old, RMSEVCT = 2.2 years, RMSEGeoSAR = 2.6 years, RMSELiDAR = 2.6 years, n = 35), while the model that used LiDAR-derived variables was better for older stands.
Remote sensing can be used to estimate loblolly pine stand age, though prior knowledge of site index is required for active sensors that rely primarily on the relationship between age and height. / Ph. D.
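The LiDAR model's form, the common logarithm of stand age regressed on an upper percentile of return height, can be sketched on synthetic data. The coefficients, noise level, and variable names below are assumptions for illustration, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 45  # plot count mirroring the LiDAR model; the data are synthetic
age = rng.uniform(5, 71, size=n)
# Assume canopy height grows with log age, plus measurement noise
p995_height = 8.0 * np.log10(age) + rng.normal(scale=0.5, size=n)

# Regress log10(age) on the height percentile, then back-transform
X = np.column_stack([np.ones(n), p995_height])
y = np.log10(age)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
pred_age = 10 ** (X @ beta)
rmse = np.sqrt(np.mean((pred_age - age) ** 2))
print(f"RMSE: {rmse:.1f} years")
```

The back-transform step matters: because the model predicts log age, errors are multiplicative in age, which is one reason such models degrade on older, taller stands where height saturates.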
85

Xeditor: Inferring and Applying XML Consistency Rules

Wen, Chengyuan 12 1900 (has links)
XML files are frequently used by developers when building Web applications or Java EE applications. However, maintaining XML files is challenging and time-consuming because the correct usage of XML entities is always domain-specific and rarely well documented. Also, the existing compilers and program analysis tools seldom examine XML files. In this thesis, we developed a novel approach to XML file debugging called Xeditor, where we extract XML consistency rules from open-source projects and use these rules to detect XML bugs. There are two phases in Xeditor: rule inference and application. To infer rules, Xeditor mines XML-based deployment descriptors in open-source projects, extracting XML entity pairs that frequently co-exist in the same files and refer to the same string literals. Xeditor then applies association rule mining to the extracted pairs. For rule application, given a program commit, Xeditor checks whether any updated XML file violates the inferred rules; if so, Xeditor reports the violation and suggests an edit for correction. Our evaluation shows that Xeditor inferred rules with high precision (83%). For injected XML bugs, Xeditor detected rule violations and suggested changes with 74.6% precision and 50% recall. More importantly, Xeditor identified 31 genuinely erroneous XML updates in version history, 17 of which were fixed by developers in later program commits. This observation implies that by using Xeditor, developers would have avoided introducing errors when writing XML files. Finally, we compared Xeditor with a baseline approach that suggests changes based on frequently co-changed entities, and found Xeditor to outperform the baseline for both rule inference and rule application. / XML files are frequently used in Java programming and Web application development.
However, it is a challenge to maintain XML files, since these files should follow various domain-specific rules and the existing program analysis tools seldom check XML files. In this thesis, we introduce a new approach to XML file debugging called Xeditor that extracts XML consistency rules from open-source projects and uses these rules to detect XML bugs. To extract the rules, Xeditor first looks at working XML files and finds all pairs of entities A and B that coexist in one file and have the same value on at least one occasion. Xeditor then checks, when A occurs, the probability that B also occurs. If the probability is high enough, Xeditor infers a rule that A is associated with B. To apply the rules, Xeditor checks XML files with errors. If a file violates a previously inferred rule, Xeditor reports the violation and suggests a change. Our evaluation shows that Xeditor inferred correct rules with high precision (83%). More importantly, Xeditor identified issues in previous versions of XML files, and many of those issues were fixed by developers in later versions. Therefore, Xeditor is able to help find and fix errors when developers write their XML files.
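The rule-inference step described above, finding entity pairs that co-occur and keeping those whose conditional probability clears a threshold, can be sketched as a small association-rule miner. The entity names and the confidence threshold below are illustrative assumptions, not Xeditor's actual configuration.

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "file" is the set of XML entity names it declares
# (entity names are illustrative, not drawn from real deployment descriptors)
files = [
    {"servlet-name", "servlet-class", "url-pattern"},
    {"servlet-name", "servlet-class"},
    {"servlet-name", "url-pattern"},
    {"filter-name", "filter-class"},
]

pair_count = Counter()
item_count = Counter()
for entities in files:
    item_count.update(entities)
    pair_count.update(combinations(sorted(entities), 2))

# Infer a rule A -> B when confidence = P(B present | A present) clears a threshold
MIN_CONF = 0.6
rules = []
for (a, b), n_ab in pair_count.items():
    if n_ab / item_count[a] >= MIN_CONF:
        rules.append((a, b))
    if n_ab / item_count[b] >= MIN_CONF:
        rules.append((b, a))
print(rules)
```

Applying a rule is then a set-membership check: an XML file containing `A` but missing `B` for some inferred rule A -> B gets flagged, and `B` is the suggested addition.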
86

Empirical Analysis of User Passwords across Online Services

Wang, Chun 05 June 2018 (has links)
Leaked passwords from data breaches can pose a serious threat if users reuse or slightly modify the passwords for other services. With more and more online services getting breached today, there is still a lack of large-scale quantitative understanding of the risks of password reuse and modification. In this project, we perform the first large-scale empirical analysis of password reuse and modification patterns using a ground-truth dataset of 28.8 million users and their 61.5 million passwords in 107 services over 8 years. We find that password reuse and modification are very common behaviors (observed for 52% of the users). More surprisingly, sensitive online services such as shopping websites and email services received the most reused and modified passwords. We also observe that users would still reuse the already-leaked passwords for other online services for years after the initial data breach. Finally, to quantify the security risks, we develop a new training-based guessing algorithm. Extensive evaluations show that more than 16 million password pairs (30% of the modified passwords and all the reused passwords) can be cracked within just 10 guesses. We argue that more proactive mechanisms are needed to protect user accounts after major data breaches. / Master of Science
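A study like this rests on classifying cross-service password pairs as identical, modified, or distinct. A minimal sketch of that classification using Levenshtein edit distance follows; the threshold is an illustrative assumption, not the paper's actual similarity metric, and the sample passwords are hypothetical.

```python
# Classify a pair of passwords from two services as reused, modified, or distinct.
def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def classify(p1: str, p2: str) -> str:
    if p1 == p2:
        return "reused"
    # Treat near matches as modifications of the same base password
    if edit_distance(p1, p2) <= max(len(p1), len(p2)) // 3:
        return "modified"
    return "distinct"

print(classify("hunter2", "hunter2"))     # reused
print(classify("hunter2", "hunter2018"))  # modified: a common year-suffix tweak
print(classify("hunter2", "tr0ub4dor"))   # distinct
```

The "modified" bucket is what makes such attacks dangerous: small, predictable tweaks (appended digits, capitalization) are exactly what training-based guessing algorithms learn to enumerate first.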
87

How Do Java Developers Reuse StackOverflow Answers in Their GitHub Projects?

Chen, Juntong 09 September 2022 (has links)
StackOverflow (SO) is a widely used question-and-answer (Q&A) website for software developers and computer scientists. GitHub is a code hosting platform for collaboration and version control. Popular software libraries are open-source and published in repositories on GitHub. Preliminary observation shows that developers cite SO questions in their GitHub repositories. This observation inspired us to explore the relationship between SO posts and GitHub repositories, and to help software developers better understand the characteristics of SO answers that are reused by GitHub projects. We conducted an empirical study to investigate the SO answers reused by Java code from public GitHub projects. We used a hybrid approach to ensure precise results: code clone detection, keyword-based search, and manual inspection. This approach helped us identify the answers leveraged by developers. Based on the identified answers, we further investigated the topics of the discussion threads, answer characteristics (e.g., scores, ages, code lengths, and text lengths), and developers' reuse practices. We observed both reused and unused answers. Compared with unused answers, we found that the reused answers mostly have higher scores, longer code, and longer plain-text explanations. Most reused answers were related to implementing specific coding tasks. In 9% (40/430) of scenarios, developers entirely copied code from one or multiple answers of an SO discussion thread. In the other 91% (390/430) of scenarios, developers only partially reused code or created brand-new code from scratch. We investigated 130 SO discussion threads referred to by Java developers in 356 GitHub projects, and arranged those into five different categories. Our findings can help the SO community better distribute programming knowledge and skills, as well as inspire future research related to SO and GitHub.
/ Master of Science / StackOverflow (SO) is a widely used question-and-answer (Q&A) website for software developers and computer scientists. GitHub is a code hosting platform for collaboration and version control. Popular software libraries are open-source and published in repositories on GitHub. Preliminary observation shows that developers cite SO questions in their GitHub repositories. This observation inspired us to explore the relationship between SO posts and GitHub repositories, and to help software developers better understand the characteristics of SO answers that are reused by GitHub projects. Our objectives are to guide SO answerers to help developers better, and to help tool builders understand how SO answers shape software products. Thus, we conducted an empirical study to investigate the SO answers reused by Java code from public GitHub projects. We used a hybrid approach to refine our dataset and ensure precise results. Our hybrid approach includes three steps. The first step is code clone detection: we compared code snippets with a code clone detection tool to find similarity. The second step is a keyword-based search: we created multiple keywords to search within GitHub code to find referenced answers missed by step one. Lastly, we manually inspected the outputs of steps one and two to ensure zero false positives in our data. This approach helped us identify the answers leveraged by developers. Based on the identified answers, we further investigated the topics of the discussion threads, answer characteristics, and developers' reuse practices. We observed both reused and unused answers. Compared with unused answers, we found that the reused answers mostly have higher scores, longer code, and longer plain-text explanations. Most reused answers were related to implementing specific coding tasks. In 9% of scenarios, developers entirely copied code from one or multiple answers of an SO discussion thread. In the other 91% of scenarios, developers only partially reused code or created brand-new code from scratch. Our findings can help the SO community better distribute programming knowledge and skills, as well as inspire future research related to SO and GitHub.
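The clone-detection step of the hybrid approach can be approximated with a simple token-overlap measure between an SO answer snippet and a GitHub code snippet. Real studies use dedicated clone detectors; the Jaccard measure and the Java snippets below are illustrative assumptions, not the thesis's tooling.

```python
import re

# Token-level Jaccard similarity as a lightweight stand-in for clone detection.
def tokens(code: str) -> set:
    # Identifier-like tokens only; punctuation and literals are ignored
    return set(re.findall(r"[A-Za-z_]\w*", code))

def similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Hypothetical snippets: a GitHub copy that renamed one variable
so_answer = "for (String s : list) { System.out.println(s); }"
gh_code   = "for (String item : list) { System.out.println(item); }"
print(f"{similarity(so_answer, gh_code):.2f}")
```

A high overlap flags a candidate clone, which the study's later keyword-search and manual-inspection steps would then confirm or reject; this is why the hybrid pipeline ends with human review rather than trusting the similarity score alone.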
88

The impact of response styles on the stability of cross-national comparisons

Reynolds, Nina L., Diamantopoulos, A., Simintiras, A. January 2006 (has links)
No / Response style effects are a source of bias in cross-national studies, with some nationalities being more susceptible to particular response styles than others. While response styles, by their very nature, vary with the form of the stimulus involved, previous research has not investigated whether cross-national differences in response styles are stable across different forms of a stimulus (e.g., item wording, scale type, response categories). Using a quasi-experimental design, this study shows that response style differences are not stable across different stimulus formats, and that response style effects impact on substantive cross-national comparisons in an inconsistent way.
89

Inference for Cox's Regression Model via a New Version of Empirical Likelihood

Jinnah, Ali 28 November 2007 (has links)
The Cox proportional hazards model is one of the most popular tools used in survival analysis. The empirical likelihood (EL) method has been used to study the Cox proportional hazards model. In recent work by Qin and Jing (2001), an empirical-likelihood-based confidence region is constructed under the assumption that the baseline hazard function is known. However, in Cox's regression model the baseline hazard function is unspecified. In this thesis, we re-formulate empirical likelihood for the vector of regression parameters by estimating the baseline hazard function. The EL confidence regions are obtained accordingly. In addition, an adjusted empirical likelihood (AEL) method is proposed. Furthermore, we conduct extensive simulation studies to evaluate the performance of the proposed empirical likelihood methods in terms of coverage probabilities, comparing them with the normal-approximation-based method. The simulation studies show that all three methods produce similar coverage probabilities.
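Coverage probability, the criterion used to compare the EL, AEL, and normal-approximation methods, can be illustrated with a toy Monte Carlo experiment. This is a simplified analogue for a simple mean with a normal-approximation interval, not the thesis's Cox-model simulation; all settings below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
true_mean, n, reps, z = 1.0, 100, 2000, 1.96

# Repeatedly draw a sample, build a 95% normal-approximation interval
# for the mean, and count how often the interval covers the truth.
hits = 0
for _ in range(reps):
    x = rng.exponential(scale=true_mean, size=n)  # skewed data, like survival times
    half = z * x.std(ddof=1) / np.sqrt(n)
    hits += abs(x.mean() - true_mean) <= half
print(f"empirical coverage: {hits / reps:.3f}")  # nominal level is 0.95
```

An empirical coverage close to the nominal 95% is what "the methods produce similar coverage probabilities" means operationally; methods like EL are attractive precisely because they can hold coverage better on skewed data without a normality assumption.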
90

Problems and Possibilities with Non-Empirical Assessment of Scientific Theories : An Analysis of the Argument Given by Richard Dawid / Problem och möjligheter med icke-empirisk bedömning av vetenskapliga teorier : En analys av Richard Dawids argument

Skott, Anton January 2020 (has links)
This essay examines the argument given by Richard Dawid (2013, 2019) for the viability of non-empirical assessment of scientific theories. Dawid's argument is supposed to show that trust in a scientific theory can be justified without any direct empirical testing of the theory. This view is fundamentally different from what will be called the classical paradigm of theory assessment. The classical paradigm holds that only empirical testing can justify belief in a theory. It is argued in this essay that Dawid's argument does not provide sufficient reasons for claiming that non-empirical assessment can be seen as a valid form of justification of scientific theories. However, it is further argued that non-empirical assessment still can play an important role when evaluating the status of a theory that cannot yet be tested empirically.
