601 |
A Longitudinal Study of Privacy Awareness in the Digital Age and the Influence of Knowledge. Williams, Therese L. 15 August 2017
Privacy, in the modern connected world, has become a much-discussed topic in society, ranging from privacy concerns to impacts, attitudes, practices, and technologies. In today's environment of pervasive social media and revelations of government spying, personal privacy is portrayed either as non-existent or as something that can be achieved to varying degrees through knowledge or awareness of how our private information is collected and used. This research strives to answer the question: using Alan Westin's privacy categories, what is the general awareness of privacy issues in social media and smartphone usage, and how does it change when knowledge is provided over a fixed period of time? A longitudinal study was conducted to collect data from 257 participants. Surprisingly, the percentages in each of the three categories (Privacy Fundamentalist, Privacy Pragmatist, and Privacy Unconcerned) are not significantly different from Westin's last research in 2003. However, the results show that, with knowledge of what type of private information is collected and how it is used, the category of an individual is likely to change over time.
|
602 |
Library as a collaborative partner in teaching and learning: role and contribution of the library in e-learning at Monash University. Mgquba, Sibusisiwe K. January 2015
Technology-enhanced learning has become one of the dominant modes of teaching and learning in higher education today. Indeed, it seems that no higher education institution can survive without embracing the new educational technologies that have come to define teaching and learning in the knowledge era. E-learning, as such, has become one of the dominant forms of delivering teaching and learning content. Rooted in established pedagogical foundations, e-learning adopts the constructivist approach to teaching and learning, which attributes the construction of knowledge to learner experiences; learners thus construct their own understanding and knowledge through interaction with others. As universities adopt the e-learning approach, libraries are also compelled to deliver their content in ways, and on platforms, where the new generation of students interacts.
The focus of this research is to find out how Monash University Library has risen to the challenge of integrating its vast resources and services through the medium of e-learning, especially pertaining to the delivery of Information Research and Learning Skills (IRLS). The research aims to find out what the challenges, strengths, and limitations are in this mode of information and content delivery. But the most pertinent question the study seeks to answer is: "What is the effectiveness of e-learning in the provision of IRLS?"
The results of the study culminate in a few suggestions by the researcher which could be employed to better assess the effectiveness of e-learning in IRLS. / Mini Dissertation (MIT)--University of Pretoria, 2015. / Information Science / MIT / Unrestricted
|
603 |
Librarian Web-based training: an investigation into the Tshwane University of Technology’s Library and Information Services’ use of broadband in training. Boucher, Belinda Elfriede. January 2015
Broadband is a critical success factor in improving overall living standards. This is especially true for the development of skills through training. Broadband makes it possible to transfer data-intensive training material over the Internet using web-based training tools and technologies such as video and video tutorials; Web 2.0 platforms such as Facebook, blogs, and vlogs; and live streaming such as virtual classes, online conferencing, and webinars.
After establishing that the Tshwane University of Technology (TUT) has the broadband capacity to utilise web-based training tools and technologies, this study then investigated the advantages and disadvantages of using these tools and technologies and their effect on staff development.
This study adopted a mixed-methods approach. Two questionnaires gathered both quantitative and qualitative data. TUT librarians were asked whether they use Web-based training tools and technologies and, based on their experience, to indicate which tools and technologies they use, what advantages and disadvantages they have experienced, and what the effects of Web-based training are on their personal development and on their institution. TUT online service and product suppliers were also asked whether they offer Web-based training facilities, which Web-based training tools and technologies they use in their training programmes, what advantages and disadvantages they have experienced when offering Web-based training, and what the effects are on librarians.
This study found that TUT librarians use broadband to conduct Web-based training using various tools and technologies. Web-based training opportunities are offered to TUT by most online service and product providers. This study identified various advantages and disadvantages of using Web-based training tools and technologies, and found that they definitely play a role in staff development and in the improvement of work quality and productivity. / Mini Dissertation (MIT)--University of Pretoria, 2015. / Information Science / MIT / Unrestricted
|
604 |
Gradient Descent for Optimization Problems With Sparse Solutions. Chen, Hsieh-Chung. 25 July 2017
Sparse modeling is central to many machine learning and signal processing algorithms, because finding a parsimonious model often implicitly removes noise and reveals structure in data. Sparse models appear in applications such as feature selection, feature extraction, sparse support vector machines, sparse logistic regression, denoising, and compressive sensing. This raises great interest in solving optimization problems with sparse solutions.
There has been substantial interest in sparse optimization in the last two decades. Out of the various approaches, the gradient descent methods and the path following methods have been most successful. Existing path following methods are mostly designed for specific problems. Gradient descent methods are more general, but they do not explicitly leverage the fact that the solution is sparse.
This thesis develops the auxiliary sparse homotopy (ASH) method for gradient descent, which is designed to converge quickly to answers with few non-zero components by maintaining a sparse interim state while making sufficient descent. ASH modifies gradient methods by applying an auxiliary sparsity constraint that is relaxed dynamically over time. This principle is applicable to general gradient descent methods, including accelerated proximal gradient descent, coordinate descent, and stochastic gradient descent.
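As an illustration only (not the thesis's implementation), the sketch below applies the stated principle to plain proximal gradient descent for the LASSO problem: the iterate is hard-thresholded to a sparsity budget k that is relaxed as the iterations proceed. The schedule, parameter names, and growth factor are assumptions made for this example.

import numpy as np

def soft_threshold(v, t):
    # proximal operator of t*||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ash_proximal_gradient(A, b, lam, n_iter=200, k0=5, growth=1.1):
    # Proximal gradient (ISTA) for 0.5*||Ax - b||^2 + lam*||x||_1, with an
    # auxiliary hard-sparsity constraint that is relaxed over time.
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(n)
    k = float(k0)                               # auxiliary sparsity budget
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)                # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam)
        if k < n:                               # keep only the k largest-magnitude entries
            drop = np.argpartition(np.abs(x), n - int(k))[: n - int(k)]
            x[drop] = 0.0
            k *= growth                         # relax the constraint dynamically
    return x

# Usage on a synthetic sparse-recovery problem
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400)
x_true[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_hat = ash_proximal_gradient(A, b, lam=0.1)
print("non-zeros in solution:", np.count_nonzero(np.abs(x_hat) > 1e-6))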
For sparse optimization problems, ASH-modified algorithms converge faster than their unmodified counterparts, while inheriting their convergence guarantees and flexibility in handling various regularization functions. We demonstrate the advantages of ASH in several applications. Even though some of these problems (notably LASSO) have attracted many dedicated solvers over the years, we find that ASH is very competitive with the state of the art in all these applications in terms of convergence speed and cost per iteration. / Engineering and Applied Sciences - Computer Science
|
605 |
A Sequential Exploratory Mixed Methods Study of Carnegie Libraries and the Library Profession, 1900-1910. Schuster, Kristen M. 21 July 2017
Andrew Carnegie's philanthropy made it possible for thousands of communities in the United States (U.S.) to build free public libraries. Contemporary scholarship in library and information science (LIS) that deals with Carnegie's philanthropy tends to emphasize generalized historical ideals associated with the construction of public libraries. As a result, it often fails to inquire critically into the relationships between the work performed by librarians and assumptions about the cultural value of Carnegie libraries. This dissertation investigates broad trends in library history in order to better understand the particular experiences of fifteen Midwestern communities that built public libraries with Andrew Carnegie's money in the first decade of the 20th century. Mixed methods research supports the synthesis of broad qualitative data with specific quantitative data, which in turn supports assessments of primary sources in relation to scholarship about the library profession and Carnegie's philanthropy. Comparing and contrasting findings from two distinct data sets makes it possible to discuss idiosyncrasies in architectural trends and to better understand the role professional rhetoric played in their development within a specific geographic region (the Midwest).
|
606 |
Scheduling of parallel matrix computations and data layout conversion for HPC and Multi-Core Architectures. Karlsson, Lars. January 2011
Dense linear algebra represents fundamental building blocks in many computational science and engineering applications. The dense linear algebra algorithms must be numerically stable, robust, and reliable in order to be usable as black-box solvers by expert as well as non-expert users. The algorithms also need to scale and run efficiently on massively parallel computers with multi-core nodes. Developing high-performance algorithms for dense matrix computations is a challenging task, especially since the widespread adoption of multi-core architectures. Cache reuse is an even more critical issue on multi-core processors than on uni-core processors due to their larger computational power and more complex memory hierarchies. Blocked matrix storage formats, in which blocks of the matrix are stored contiguously, and blocked algorithms, in which the algorithms exhibit large amounts of cache reuse, remain key techniques in the effort to approach the theoretical peak performance. In Paper I, we present a packed and distributed Cholesky factorization algorithm based on a new blocked and packed matrix storage format. High-performance node computations are obtained as a result of the blocked storage format, and the use of look-ahead leads to improved parallel efficiency. In Paper II and Paper III, we study the problem of in-place matrix transposition in general and in-place matrix storage format conversion in particular. We present and evaluate new high-performance parallel algorithms for in-place conversion between the standard column-major and row-major formats and the four standard blocked matrix storage formats. Another critical issue, besides cache reuse, is that of efficient scheduling of computational tasks. Many weakly scalable parallel algorithms are efficient only when the problem size per processor is relatively large. A current research trend focuses on developing parallel algorithms which are more strongly scalable and hence more efficient also for smaller problems. In Paper IV, we present a framework for dynamic node-scheduling of two-sided matrix computations and demonstrate that by using priority-based scheduling one can obtain an efficient scheduling of a QR sweep. In Paper V and Paper VI, we present a blocked implementation of two-stage Hessenberg reduction targeting multi-core architectures. The main contributions of Paper V are in the blocking and scheduling of the second stage. Specifically, we show that the concept of look-ahead can be applied also to this two-sided factorization, and we propose an adaptive load-balancing technique that allows us to schedule the operations effectively. / Matrix computations are fundamental building blocks in many computationally intensive scientific and engineering applications. The algorithms must be numerically stable and robust so that users can rely on the computed results. They must also scale and run efficiently on massively parallel computers whose nodes consist of multi-core processors. Developing high-performance algorithms for dense matrix computations is challenging, especially since the introduction of multi-core processors. Reusing data in the cache memories of a multi-core processor is all the more important because of its high computational performance. Two central techniques in the pursuit of algorithms with optimal performance are blocked algorithms and blocked matrix storage formats. A blocked algorithm has a memory access pattern that matches the memory hierarchy well. A blocked matrix storage format places the matrix elements in memory so that the elements of specific matrix blocks are stored contiguously. Paper I presents an algorithm for Cholesky factorization of a matrix stored compactly in distributed memory. The new storage format is blocked and thereby enables high performance. Paper II and Paper III describe how a conventionally stored matrix can be converted to and from a blocked storage format using only a very small amount of extra storage; the solution builds on a new parallel algorithm for transposition of rectangular matrices. When designing a scalable parallel algorithm, one must also consider how the various computational tasks are scheduled efficiently. Many so-called weakly scalable algorithms are efficient only for relatively large problems. A current research trend is to develop strongly scalable algorithms, which are more efficient also for smaller problems. Paper IV introduces a dynamic scheduling system for two-sided matrix computations: the computational tasks are distributed statically across the nodes and then scheduled dynamically within each node. The paper also shows how priority-based scheduling turns a previously inefficient algorithm for a QR sweep into an efficient one. Paper V and Paper VI present new parallel blocked algorithms, designed for multi-core processors, for a two-stage Hessenberg reduction. The central contributions of Paper V are a blocked algorithm for the second stage of the reduction and an adaptive load-balancing method.
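For readers unfamiliar with blocked matrix storage, the following sketch shows the basic idea of converting a column-major matrix into a blocked (tiled) layout in which each tile is stored contiguously. It is an out-of-place illustration using extra storage; the papers above concern the harder in-place variant, and the block size, layout order, and function names here are assumptions made for this example.

import numpy as np

def column_major_to_blocked(buf, m, n, nb):
    # buf: flat array of length m*n holding an m-by-n matrix in column-major order.
    # Returns a flat array in blocked layout: nb-by-nb tiles, each tile stored
    # contiguously in column-major order, tiles visited column-major by block.
    # Assumes nb divides both m and n (an illustrative simplification).
    A = buf.reshape(n, m).T                   # view the flat column-major buffer as m x n
    out = np.empty(m * n, dtype=buf.dtype)
    pos = 0
    for jb in range(0, n, nb):                # block columns
        for ib in range(0, m, nb):            # block rows
            tile = A[ib:ib + nb, jb:jb + nb]
            out[pos:pos + nb * nb] = tile.T.ravel()   # tile laid out in column-major order
            pos += nb * nb
    return out

# Example: a 4-by-6 matrix with 2-by-2 blocks
m, n, nb = 4, 6, 2
buf = np.arange(m * n, dtype=float)           # column-major: element (i, j) is buf[i + j*m]
blocked = column_major_to_blocked(buf, m, n, nb)
print(blocked[:4])                            # first tile: rows 0-1, columns 0-1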
|
607 |
Envisioning a future decision support system for requirements engineering: A holistic and human-centred perspective. Alenljung, Beatrice. January 2008
No description available.
|
608 |
Utilizing Image-based Formats to Optimize Pattern Data Format and Processing in Mask and Maskless Pattern Generation Lithography. Abboud, Fayez. 01 January 2012
According to Moore's law, the IC (Integrated Circuit) minimum feature size shrinks node over node, resulting in denser compaction of the design. Such compaction results in more polygons per design. Extending optical lithography to print features at a fraction of the wavelength is only possible with the use of optical tricks, such as RET (Resolution Enhancement Techniques) and ILT (Inverse Lithography Technology), to account for the systematic corrections needed between the mask and the wafer exposure. Such optical tricks add extensive decorations and edge jogs to the primary features, creating even larger increases in the number of polygons per design. As the pattern file size increases, processing time and complexity become directly proportional to the number of polygons; this increase is now becoming one of the key obstacles in the data processing flow. Polygon-based or vector-based pattern file formats have been extended for the past forty years, and their applicability to modern designs and trends is now in question.
The current polygon-based data flow for IC pattern processing is cumbersome, inefficient, and prone to rounding and truncation errors. The original design starts with pixelated images with maximum edge-definition accuracy. The curvilinear shapes are then fitted into polygons to comply with industry-standard formats, thus losing edge-definition accuracy. The polygons are then converted to raster images to approximate the original intended data.
This dissertation builds on the modern advancements in digital image and video processing to allow for a new image-based format, Sequential-Pixel-Frame, specifically for integrated circuit pattern representation. Unlike standard lossy compressed video, the new format contains all the information and accuracy intended for mask making and direct write. The new format is defined to replace the old historical polygon-based formats. In addition, the dissertation proposes a more efficient data flow from tape-out to mask making. The key advantages of the new format are a smaller file size and a reduced processing time for the more complex patterns intended for advanced technology nodes. However, the new format did not offer such advantages for the older technology nodes. This is in line with the goals and expectations of the research.
|
609 |
CrashApp™: Concurrent Multiple Stakeholder Evaluation of a DSR Artefact. Papp, Timothy M. 05 December 2017
The successful design, implementation, deployment, and use of mobile software applications is rare. While many mobile apps are developed, few succeed. This design science research project builds and evaluates CrashApp™, a mobile application that connects lawyers and clients before, during, and after car accidents. The effective, widespread use of this app depends on satisfying the needs of three groups of stakeholders: the end-users (clients), the owners (lawyers), and the software developers. The research objective is to investigate the key differences among the three stakeholder groups in their evaluation criteria for mobile app success. Evaluation strategies and methods are selected to collect data that measures each group's satisfaction with the constructed application artefact. Research contributions are the identification of multiple stakeholder groups and the ability to design rich evaluation strategies that provide measures of application success. Practice contributions are the design and development of a useful mobile app that provides needed services to the client and effective client connections for the law firm to interact with the clients. The project produced an instantiation of the design artefact, the CrashApp™ mobile application, which was evaluated with a naturalistic evaluation approach, including the following methods and techniques: focus groups, focused surveys, usability surveys, and real-life tests and assessments.
|
610 |
Defining Data Science and Data Scientist. Dedge Parks, Dana M. 03 January 2018
The world's data sets are growing exponentially every day due to the large number of devices generating data residue across the multitude of global data centers. What to do with these massive data stores, how to manage them, and who performs these tasks have not been adequately defined and agreed upon by academics and practitioners. Data science is a cross-disciplinary amalgam of skills, techniques, and tools that allows business organizations to identify trends and build assumptions leading to key decisions. It is in an evolutionary state, as new technologies and capabilities are still being developed and deployed. This document defines the data science tasks and the data scientist skills needed to succeed with analytics across the data stores. Research conducted across twenty-two academic articles, one book, eleven interviews, and seventy-eight surveys is combined to articulate the convergence on the term data science. In addition, the research identified five key skill categories (themes), comprising fifty-five competencies, that are used globally by data scientists to successfully perform the art and science activities of data science. Unspecified portions of statistics, technology programming, and the development of models and calculations are combined to determine outcomes that lead global organizations to make strategic decisions every day. This research is intended to provide a constructive summary of the topics data science and data scientist in order to spark the dialogue needed to formally finalize the definitions and ultimately change the world by establishing guidelines on how data science is performed and measured.
|