61 |
Peruvian ETDs: Challenges and opportunities / Tesis digitales en Perú: Retos y oportunidades / Huaroto, Libio; Saravia Lopez de Castilla, Miguel. 07 November 2019
Peru's digital thesis programs began in 2002; Cybertesis was the first platform adopted, and DSpace is now the official software. In June 2013, Law 30035 was published, requiring public entities to implement an open access digital repository and creating the “ALICIA” portal. In September 2016 the “RENATI” portal came into operation, a platform that disseminates theses from Peruvian higher education institutions. ALICIA currently holds 194 repositories and 271,000 open access publications, while RENATI includes 170 thesis repositories and 218,397 documents under open and restricted access. Institutional policies and government regulations have driven rapid growth of repositories, especially thesis repositories, and have likewise given rise to new services aimed at improving thesis quality.
|
62 |
Organizational Memory Systems as a Source of Learning for New Employees in an Innovative Context / Zadayannaya, Liudmila. January 2012
Organizational memory is said to be one of the essential factors in organizational learning, particularly the part concerned with knowledge flowing from an organization to its employees. Often viewed as a system of knowledge repositories, organizational memory is argued to be important in various contexts. The purpose of this study is to explore the impact of organizational memory in two such contexts: the presence of new employees, and an organization engaged in innovation activity. The importance of organizational memory for new employees can be explained by the fact that it is through encountering it that they become socialized into the organization. Organizational memory also influences the innovative behaviour of employees. The research is performed as a case study in which the object of study sits in a combined context: new employees of an R&D department learn from different organizational memory systems. The data for this case study were collected through qualitative interviews with both the newcomers and their supervisor. The results show that new employees face a range of memory systems, and this range does not depend on the innovativeness of the work they are involved in. It proved possible to examine separately the memory systems and the methods by which the newcomers accessed them. The most important access methods in this case turned out to be personal communication and IT-enabled means, although a number of other methods were also relevant. Focusing on how this knowledge can support the innovative behaviour of new employees, the study identified several ways in which both incremental and radical innovations can be enhanced. The memory systems were found to affect the newcomers' innovative behaviour by signalling that such behaviour is expected, by providing “old” knowledge, and by offering hints about where “old” and “new” knowledge might be found. In general, the findings suggest that examining memory systems separately from the ways they are accessed may give valuable insights for rethinking how the properties of memory systems have been defined so far.
|
63 |
Information Theoretic Evaluation of Change Prediction Models for Large-Scale Software / Askari, Mina. January 2006
During software development and maintenance, as a software system evolves, changes are made and bugs are fixed in various files. In large-scale systems, file histories are stored in software repositories, such as CVS, which record modifications. By studying software repositories, we can learn about open source software development processes. Knowing in advance where these changes will happen gives managers and developers the ability to concentrate on those files. Due to the unpredictability of the software development process, proposing an accurate change prediction model is hard; it is even harder to compare different models against the actual model of changes, which is not available.

In this thesis, we first analyze the information generated during the development process, which can be obtained by mining the software repositories. We observe that the change data follow a Zipf distribution and exhibit self-similarity. Based on the extracted data, we then develop three probabilistic models to predict which files will have changes or bugs. One purpose of creating these models is to rank the files of the software that are most susceptible to having faults.

The first model is Maximum Likelihood Estimation (MLE), which simply counts the number of events (i.e., changes or bugs) that occur in each file and normalizes the counts to compute a probability distribution. The second model is Reflexive Exponential Decay (RED), in which we postulate that the predictive rate of modification of a file is incremented by any modification to that file and decays exponentially; each new change or bug adds a new exponential contribution on top of the earlier ones. The third model is called RED Co-Changes (REDCC). With each modification to a given file, the REDCC model not only increments that file's predictive rate but also increments the rates of other files that are related to the given file through previous co-changes.

We then present an information-theoretic approach to evaluating the performance of different prediction models. In this approach, the closeness of a model's distribution to the actual, unknown probability distribution of the system is measured using cross entropy. We evaluate our prediction models empirically, using the proposed information-theoretic approach, on six large open source systems. Based on this evaluation, we observe that of our three prediction models, the REDCC model predicts the distribution that is closest to the actual distribution for all of the studied systems.
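To make these models concrete, the following is a minimal sketch, assuming illustrative file names, timestamps, and a decay constant rather than the thesis's actual data or parameters. It implements only the MLE and RED predictors and the cross-entropy comparison described above; the REDCC co-change extension and the thesis's exact formulations are omitted.

```python
# Hedged sketch of two change-prediction models and their cross-entropy
# evaluation; file names, timestamps (in days), and the half-life are
# illustrative assumptions, not values from the thesis.
import math
from collections import Counter, defaultdict

def mle_distribution(events):
    """MLE: count changes per file and normalize into a probability distribution."""
    counts = Counter(f for _, f in events)
    total = sum(counts.values())
    return {f: c / total for f, c in counts.items()}

def red_distribution(events, now, half_life=30.0):
    """RED: every change adds an exponentially decaying contribution to a file's rate."""
    decay = math.log(2) / half_life            # per-day decay constant (assumed)
    rates = defaultdict(float)
    for t, f in events:                        # t = days since project start
        rates[f] += math.exp(-decay * (now - t))
    total = sum(rates.values())
    return {f: r / total for f, r in rates.items()}

def cross_entropy(model, actual_events, floor=1e-9):
    """Average -log2 p(file) the model assigns to the files that actually changed."""
    return -sum(math.log2(model.get(f, floor)) for _, f in actual_events) / len(actual_events)

# Toy usage: fit on early history, evaluate on later changes (lower is better).
history = [(1, "core.c"), (2, "core.c"), (5, "ui.c"), (20, "core.c"), (25, "net.c")]
future = [(31, "core.c"), (32, "net.c"), (33, "core.c")]
for name, dist in [("MLE", mle_distribution(history)),
                   ("RED", red_distribution(history, now=30.0))]:
    print(name, round(cross_entropy(dist, future), 3))
```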
|
64 |
Evidence-based Software Process Recovery / Hindle, Abram. 20 October 2010
Developing a large software system involves many complicated, varied, and
inter-dependent tasks, and these tasks are typically implemented using a
combination of defined processes, semi-automated tools, and ad hoc
practices. Stakeholders in the development process --- including software
developers, managers, and customers --- often want to be able to track the
actual practices being employed within a project. For example, a customer
may wish to be sure that the process is ISO 9000 compliant, a manager may
wish to track the amount of testing that has been done in the current
iteration, and a developer may wish to determine who has recently been
working on a subsystem that has had several major bugs appear in it.
However, extracting the software development processes from an existing
project is expensive if one must rely upon manual inspection of artifacts
and interviews of developers and their managers. Previously, researchers
have suggested the live observation and instrumentation of a project to
allow for more measurement, but this is costly, invasive, and also requires
a live running project.
In this work, we propose an approach that we call software process
recovery that is based on after-the-fact analysis of various kinds of
software development artifacts. We use a variety of supervised and
unsupervised techniques from machine learning, topic analysis, natural
language processing, and statistics on software repositories such as version
control systems, bug trackers, and mailing list archives. We show how we can
combine all of these methods to recover process signals that we map back to
software development processes such as the Unified Process. The Unified
Process has been visualized using a time-line view that shows effort per
parallel discipline occurring across time. This visualization is called the
Unified Process diagram. We use this diagram as inspiration to produce
Recovered Unified Process Views (RUPV) that are a concrete version of this
theoretical Unified Process diagram. We then validate these methods using
case studies of multiple open source software systems.
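As a rough illustration of the recovery idea (not the dissertation's actual pipeline, which relies on supervised learning, topic analysis, and natural language processing), the sketch below classifies commit messages into a few disciplines with made-up keyword lists and bins the counts by month, yielding the kind of per-discipline effort signal that a Recovered Unified Process View could plot over time.

```python
# Toy per-discipline effort signal from commit messages; the discipline
# keywords and the commit log are assumptions for illustration only.
from collections import defaultdict

DISCIPLINES = {
    "implementation": ["add", "implement", "refactor"],
    "testing":        ["test", "junit", "assert"],
    "deployment":     ["release", "version", "package"],
}

def classify(message):
    """Return every discipline whose keywords appear in the commit message."""
    msg = message.lower()
    return [d for d, kws in DISCIPLINES.items() if any(k in msg for k in kws)]

def effort_signal(commits):
    """commits: list of (month, message) -> {discipline: {month: count}}."""
    signal = defaultdict(lambda: defaultdict(int))
    for month, message in commits:
        for discipline in classify(message):
            signal[discipline][month] += 1
    return signal

commits = [("2010-01", "Implement parser"), ("2010-01", "Add unit test for parser"),
           ("2010-02", "Refactor IO layer"), ("2010-03", "Prepare 1.0 release")]
for discipline, months in effort_signal(commits).items():
    print(discipline, dict(months))
```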
|
65 |
Internet based PPGIS for public involved spatial decision making / Liu, Zhengrong. January 2007
Thesis (M.Sc.)--York University, 2007. Graduate Programme in Earth and Space Science. / Typescript. Includes bibliographical references (leaves 149-159). Also available on the Internet. MODE OF ACCESS via web browser by entering the following URL: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&res_dat=xri:pqdiss&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&rft_dat=xri:pqdiss:MR38802
|
66 |
Opening access to scholarly research / Colenbrander, Hilde; Morrison, Heather; Waller, Andrew. 06 June 2008
This presentation provides a basic description of open access and a very brief description of the crisis in scholarly communications which created the need for expanded access. Open access initiatives in western Canada (British Columbia, Alberta) are discussed, including institutional repository developments, the Public Knowledge Project's Open Journal Systems, and digitization projects of academic, public, and special libraries.
|
67 |
Digital ‘Publishing’ Services at UBC Library: cIRcle and more / Colenbrander, Hilde. 30 April 2009
The UBC Library, along with many other research libraries, is beginning to develop a range of publishing support services for faculty and students. This presentation focuses on cIRcle, the Library's institutional repository, and was delivered as part of a graduate seminar in the Department of English on April 2, 2009.
|
69 |
Supporting Development Decisions with Software Analytics / Baysal, Olga. January 2014
Software practitioners make technical and business decisions based on the understanding they have of their software systems. This understanding is grounded in their own experiences, but it can be augmented by studying various kinds of development artifacts, including source code, bug reports, version control meta-data, test cases, usage logs, etc. Unfortunately, the information contained in these artifacts is typically not organized in a way that is immediately useful for developers' everyday decision-making needs. To handle the large volumes of data, many practitioners and researchers have turned to analytics — that is, the use of analysis, data, and systematic reasoning for making decisions.
The thesis of this dissertation is that by applying software analytics to various development tasks and activities, we can provide software practitioners with better insights into their processes, systems, products, and users, helping them make more informed, data-driven decisions. While quantitative analytics can help project managers understand the big picture of their systems, plan for their future, and monitor trends, qualitative analytics can enable developers to perform their daily tasks and activities more quickly by helping them better manage high volumes of information.
To support this thesis, we provide three different examples of employing software analytics. First, we show how analysis of real-world usage data can be used to assess users' dynamic behaviour and the adoption trends of a software system, revealing valuable information on how software systems are used in practice.
Second, we have created a lifecycle model that synthesizes knowledge from software development artifacts, such as reported issues, source code, discussions, community contributions, etc. Lifecycle models capture the dynamic nature of how various development artifacts change over time in an annotated graphical form that can be easily understood and communicated. We demonstrate how lifecycle models can be generated and present industrial case studies where we apply these models to assess the code review process of three different projects.
Third, we present a developer-centric approach to issue tracking that aims to reduce information overload and improve developers’ situational awareness. Our approach is motivated by a grounded theory study of developer interviews, which suggests that customized views of a project’s repositories that are tailored to developer-specific tasks can help developers better track their progress and understand the surrounding technical context of their working environments. We have created a model of the kinds of information elements that developers feel are essential in completing their daily tasks, and from this model we have developed a prototype tool organized around developer-specific customized dashboards.
The results of these three studies show that software analytics can inform evidence-based decisions related to user adoption of a software project and code review processes, and can improve developers' awareness of their daily tasks and activities.
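As a hedged illustration of one of these ideas, a lifecycle model can be reduced to transition counts between artifact states; the sketch below does this for code-review patches, with state names and event histories that are assumptions rather than data from the dissertation.

```python
# Minimal lifecycle-model sketch: count state-to-state transitions of
# code-review patches; the states and histories are illustrative only.
from collections import Counter

# Per-patch ordered state histories (e.g. mined from review metadata).
histories = [
    ["submitted", "reviewed", "merged"],
    ["submitted", "reviewed", "resubmitted", "reviewed", "merged"],
    ["submitted", "reviewed", "abandoned"],
]

transitions = Counter(t for h in histories for t in zip(h, h[1:]))
for (src, dst), n in transitions.most_common():
    print(f"{src} -> {dst}: {n}")
```

The resulting counts are exactly what an annotated lifecycle graph would display on its edges.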
|
70 |
Coupled Thermo-Hydro-Mechanical-Chemical (THMC) Responses of Ontario’s Host Sedimentary Rocks for Nuclear Waste Repositories to Past and Future Glaciations and Deglaciations / Nasir, Othman. 10 October 2013
Glaciation is considered one of the main natural processes that can have a significant impact on the long-term performance of deep geological repositories (DGRs). The northern part of the North American continent has been subjected to a series of strong glaciation and deglaciation events over the past million years. Glacial cycles cause loading and unloading, temperature changes, and hydraulic head changes at the ground surface; these changes can be classified as transient boundary conditions. It is widely accepted that the periodic pattern of past glacial cycles during the Late Quaternary period results from changes in the Earth's orbital geometry and is expected to continue in the future. Therefore, from the safety perspective of DGRs, such probable events need to be taken into account. The objective of this thesis is to develop a numerical model to investigate the coupled thermo-hydro-mechanical-chemical (THMC) processes resulting from long-term past and future climate change and glaciation cycles at a proposed DGR in sedimentary rocks in southern Ontario.

The first application is to a large geological cross section that includes the entire Michigan Basin, using a coupled hydro-mechanical (HM) model. The results are compared with field data on anomalous pore water pressures from deep boreholes in the sedimentary rocks of southern Ontario. In this work, the modeling results seem to support the hypothesis that at least the underpressures in the Ordovician formation could be partially attributed to past glaciation.

The second application is to site conditions using the THMC model. The results for the pore water pressure, tracer profiles, permafrost depth, and effective stress profile are compared with the available field data; they show that solute transport in the natural limestone and shale barrier formations is controlled by diffusion, providing evidence that the main transport mechanism at depth is diffusion-dominant.

The third application, also under site conditions, determines the effects of the underground changes in a DGR caused by its construction. The results show that future glaciation loads will induce larger increases in effective stress on the shaft. Furthermore, it is found that hypothetical nuclide transport in a failed shaft can be controlled by diffusion and advection, and the simulation results show that solute transported in a failed shaft can reach the shallow bedrock groundwater zone. These results might imply that a failed shaft would substantially lose its effectiveness as a barrier.

The fourth application investigates the geochemical evolution of the sedimentary host rock at the near-field scale. In this part, a new thermo-hydro-mechanical-geochemical simulator (COMSOL-PHREEQC) is developed. It is anticipated that geochemical reactions will occur within the host rock as a result of interaction with water enriched with the CO2 generated by nuclear waste.
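As a back-of-the-envelope illustration of the diffusion-dominant transport finding (not the coupled THMC or COMSOL-PHREEQC model itself), the sketch below solves one-dimensional Fickian diffusion through a barrier with an explicit finite-difference scheme; the diffusion coefficient, barrier thickness, and time span are illustrative assumptions.

```python
# 1-D diffusion sketch with assumed, illustrative parameters (not site values).
import numpy as np

D = 1e-11                      # effective diffusion coefficient, m^2/s (assumed)
L = 100.0                      # barrier thickness, m (assumed)
nx = 201
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D         # explicit-scheme stability requires dt <= dx^2 / (2D)

c = np.zeros(nx)
c[0] = 1.0                     # constant-concentration source at the repository side

years = 100_000
steps = int(years * 365.25 * 24 * 3600 / dt)
for _ in range(steps):
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2 * c[1:-1] + c[:-2])
    c[0], c[-1] = 1.0, 0.0     # fixed boundaries (source and far-field sink)

print("Relative concentration 10 m into the barrier after %d years: %.3g"
      % (years, c[int(10 / dx)]))
```

With these assumed parameters, purely diffusive transport advances only metres to a few tens of metres even over 100,000 years, which illustrates why diffusion-dominant formations can act as effective natural barriers.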
|