1 |
Digitization, Innovation, and Participation: Digital Conviviality of the Google Cultural Institute. Stone, Leah, 26 September 2018.
The Frightful Five (Amazon, Apple, Facebook, Microsoft, and Alphabet, the parent company of Google) shape the way data are generated and distributed across digital space (Manjoo, 2017). Through their technologies and their growing scope and scale, these titans provide new ways for people to create, find, and share information online. With such control, they have maintained and expanded their reign over information commerce, changing the way people and technology interact. In this way, tech giants act as gatekeepers over data and as all-mighty creators of technologies that arguably act on humans.

Debates have consequently developed over whether technologies are exercising "computational agency" (Tufekci, 2015, p. 207). One such dispute, commonly called the Great Artificial Intelligence (AI) Debate, is being argued publicly between two of the most prominent tech titans: Elon Musk, founder of Tesla and SpaceX, and Mark Zuckerberg, founder of Facebook (Narkar, 2017). On one side sits Musk, who calls for regulatory restrictions on AI and paints doomsday pictures of robots killing humans. On the other side sits Zuckerberg, who claims AI will enhance society and make the world a better place.

This debate underscores what Illich (1973) described as organizations that operate in convivial versus non-convivial ways. As tech titans continue to advance technology, it can be argued that they operate convivially, enhancing society through participatory tools that work with humans to complete a task. Alternatively, it can be argued that technology organizations function non-convivially, manipulating society for the sake of their technologies. And while these technologies may be participating with humans (convivial) to complete a task, they may actually be working for and/or acting on humans (non-convivial) to do an activity.

The purpose of this dissertation was to establish a unique approach to studying the conviviality of technology titans and how they organize digital space, a concept the researcher coined as digital conviviality. Digital conviviality is when a technology company operates in digitally convivial ways such that it: (a) builds tools for digital communication; (b) has a value proposition that, while aimed at generating a profit, is also focused on using its technology to enhance society rather than manipulating society for the sake of its technologies; and (c) designs technological tools that work with humans, rather than tools that work for humans or act on humans, to accomplish a task. To further develop this conception of digital conviviality, an investigation was conducted into a tech titan that arguably claims to place digital conviviality at its core: Google.

Using Illich's (1973) notion of conviviality as a guide, the study explored Google's approach to convivial technologies and its ability to shape information in the arts and culture space. Through its Google Cultural Institute (GCI) and Google Arts & Culture (GAC) initiatives, Google has focused on "democratizing access to the world's culture" (Google CI Chromecast, 2014, 00:44). The study thus aimed to answer the overarching question: in what ways is the GCI a digitally convivial company, and in what ways is it not? From this, an explication of the concept of digital conviviality and a framework for studying it were developed.

Drawing from several disciplines, methodologies, and theoretical frameworks (e.g., science and technology studies, posthumanism, actor-network theory, design science in information systems, business models, digital methods, and convivial studies), a body of theory was gathered, synthesized, and extended. This material was then used to assemble a new methodological strategy called digital convivial tracking, with a design science (DS) approach and an actor-network theory (ANT) mindset. Digital convivial tracking employs traditional qualitative methods, as well as innovative digital methods, to trace important objects throughout a digital ecosystem. Because the GCI digitizes the world's arts and culture, the iconic painting The Starry Night by Vincent van Gogh (1889d) was selected as the object to track across the institute's ecosystem. This process helped identify the GCI's complex and entangled business model, as well as its technological innovations. (Abstract shortened by ProQuest.)
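The dissertation's tracking method itself was qualitative and digital-methods based; purely as a loose illustration, the minimal sketch below shows one way the core bookkeeping of digital convivial tracking could be coded: following a tracked object across ecosystem surfaces and tallying convivial versus non-convivial appearances. All surface names, fields, and records here are hypothetical, not drawn from the study.

```python
# Hypothetical sketch of "digital convivial tracking" bookkeeping: following
# one cultural object across the surfaces of a digital ecosystem. Surfaces,
# roles, and judgments are illustrative, not data from the dissertation.
from dataclasses import dataclass, field

@dataclass
class Sighting:
    surface: str     # where the object appeared (site, app, feature)
    role: str        # how it is presented there
    convivial: bool  # does the tool work *with* the viewer (participatory)?

@dataclass
class TrackedObject:
    name: str
    sightings: list = field(default_factory=list)

    def record(self, surface, role, convivial):
        self.sightings.append(Sighting(surface, role, convivial))

    def conviviality_ratio(self):
        """Share of sightings where the object sits inside a participatory
        (convivial) rather than acting-on (non-convivial) tool."""
        if not self.sightings:
            return 0.0
        return sum(s.convivial for s in self.sightings) / len(self.sightings)

starry_night = TrackedObject("The Starry Night (van Gogh)")
starry_night.record("Google Arts & Culture web viewer", "gigapixel zoom", True)
starry_night.record("Art Selfie feature", "face matching acts on the user", False)
print(f"convivial share: {starry_night.conviviality_ratio():.2f}")
```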
|
2 |
Organizational factors contributing to an effective information technology intelligence system. Taskov, Konstantin; Vedder, Richard Glen. January 2008.
Thesis (Ph. D.)--University of North Texas, December 2008. / Title from title page display. Includes bibliographical references.
|
3 |
Evolving expert knowledge bases: Applications of crowdsourcing and serious gaming to advance knowledge development for intelligent tutoring systems. Floryan, Mark, 01 January 2013.
This dissertation presents a novel effort to develop ITS technologies that adapt by observing student behavior. In particular, we define an evolving expert knowledge base (EEKB) that structures a domain's information as a set of nodes and the relationships that exist between those nodes. The novel aspect of this work is not the structure of the model but its evolving behavior. Past efforts have shown that this model, once created, is useful for providing students with expert feedback as they work within our ITS, called Rashi. We present an algorithm that observes groups of students as they work within Rashi and collects their contributions to form an accurate domain-level EEKB. We then present experimentation that simulates more than 15,000 data points of real student interaction and analyzes the quality of the EEKB models produced. We discover that EEKB models can be constructed accurately, and with significant efficiency compared to human-constructed models of the same form; we make this judgment by comparing our automatically constructed models with similar models handcrafted by a small team of domain experts. We also explore several tertiary effects, focusing on the impact that gaming and game mechanics have on this model-acquisition process. We discuss explicit game mechanics implemented in the source ITS from which our data were collected: students given our system with game mechanics contribute more data while also performing higher-quality work. Additionally, we define a novel type of game called a knowledge-refinement game (KRG), which motivates subject matter experts (SMEs) to contribute to an already constructed EEKB in order to refine the model in areas where confidence is low. Experimental work with the KRG provides strong evidence that: 1) the quality of the original EEKB was indeed strong, as validated by KRG players, and 2) both the quality and breadth of knowledge within the EEKB increase when players use the KRG.
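The abstract specifies only that an EEKB is a set of nodes and relationships that evolves from observed student contributions. The sketch below is a minimal, hypothetical rendering of that idea, in which an item enters the exposed model once enough independent students have contributed it; the threshold rule and data layout are assumptions, not Floryan's actual algorithm.

```python
# Hypothetical minimal EEKB: nodes and (node, relation, node) edges that
# become part of the exposed model once enough students endorse them.
from collections import defaultdict

class EEKB:
    def __init__(self, min_support=3):
        self.min_support = min_support        # endorsements needed to trust an item
        self.node_counts = defaultdict(int)   # node -> student endorsements
        self.edge_counts = defaultdict(int)   # (src, relation, dst) -> endorsements

    def observe(self, student_nodes, student_edges):
        """Fold one student's contributions into the evolving model."""
        for n in student_nodes:
            self.node_counts[n] += 1
        for e in student_edges:
            self.edge_counts[e] += 1

    def model(self):
        """Expose only items with enough independent support."""
        nodes = {n for n, c in self.node_counts.items() if c >= self.min_support}
        edges = {e for e, c in self.edge_counts.items()
                 if c >= self.min_support and e[0] in nodes and e[2] in nodes}
        return nodes, edges

kb = EEKB(min_support=2)
kb.observe({"fever", "infection"}, {("infection", "causes", "fever")})
kb.observe({"fever", "infection"}, {("infection", "causes", "fever")})
print(kb.model())
```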
|
4 |
An analysis of explanation and its implications for the design of explanation planners. Suthers, Daniel Derwent, 01 January 1993.
The dissertation provides an analysis of how the content and organization of explanations function to achieve communicative goals under potentially conflicting constraints, and applies this analysis to the design of a planner for the generation of explanations by computer. An implementation of this planner as a multimedia question-answering system is described. The functional analysis has four major subparts: (1) a theory of the kinds of knowledge that can provide the basis for "informatively satisfying" responses to a given question; (2) a theory of context-sensitive constraints on the choice between alternate domain models that compete as the basis for answering a given question; (3) a theory of how supplemental explanations aid the comprehension and retention of the primary explanation; and (4) a theory of how the sequencing of the parts of an explanation enhances the communicative functionality of those parts. These functional aspects of explanation imply a variety of explanation-planning subtasks with distinct information-processing requirements. A planning architecture is presented that matches these subtasks to appropriate mechanisms: (1) top-down goal refinement translates queries into specifications of relevant knowledge on which a response can be based; (2) prioritized preferences restrict competing domain models to those expected to be both informative and comprehensible to the questioner at a given point in the dialogue; (3) plan critics examine the evolving plan and post new goals to supplement the explanation as needed; and (4) a constrained graph-traversal mechanism sequences the parts of an explanation in a manner respecting certain functional relationships between the parts. Contributions include: (1) the clarification and integration of a variety of functional aspects of explanatory text, (2) an analysis of the roles and limitations of various explanation-planning mechanisms, (3) the design of a flexible explanation planner that applies various constraints on explanation independently of each other, and (4) an approach to selection between multiple domain models that is more general than previous approaches. Together these contributions clarify the correspondence between knowledge about communication, planning tasks, and types of discourse structure, and provide improved interactive explanation capabilities.
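As a rough sketch of the second mechanism, prioritized preferences, the following shows one conventional way such a filter cascade can work: each preference narrows the set of candidate domain models in priority order, but never down to the empty set. The model attributes and preference functions are invented for illustration and are not taken from the dissertation.

```python
# Hypothetical prioritized-preference filter over candidate domain models.
def select_models(candidates, preferences):
    """Apply each preference in priority order; a preference only narrows
    the candidate set if at least one model survives it."""
    survivors = list(candidates)
    for prefer in preferences:
        narrowed = [m for m in survivors if prefer(m)]
        if narrowed:  # never eliminate every candidate
            survivors = narrowed
    return survivors

candidates = [
    {"name": "circuit-as-water-flow", "familiar": True,  "precise": False},
    {"name": "circuit-as-field-eqns", "familiar": False, "precise": True},
]
preferences = [
    lambda m: m["familiar"],  # highest priority: comprehensible to the questioner
    lambda m: m["precise"],   # lower priority: informative detail
]
print(select_models(candidates, preferences))  # water-flow model survives
```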
|
5 |
Exploring the potential of knowledge engineering and HyperCard for enhancing teaching and learning in mathematics. LaLonde, Donna Elizabeth, 01 January 1991.
This study adapted the knowledge engineering process from expert systems research and used it to acquire the combined knowledge of a mathematics student and a mathematics teacher. The acquired knowledge base was used to inform the design of a HyperCard learning environment dealing with linear and quadratic functions. The researcher, who is also a mathematics teacher, acted as both knowledge engineer and expert. In the role of knowledge engineer, she conducted sixteen sessions with a student-expert. The purpose of the knowledge engineering sessions was to acquire an explicit representation of the student's expertise, that is, her view of mathematical concepts as she understood them. The teacher also made explicit her understanding of the same mathematical concepts discussed by the student. A graphical representation of the knowledge of both student and teacher was developed, and this knowledge base informed the design of a HyperCard learning environment on functions. Three major implications for teaching and learning emerged from the research. First, the teacher as knowledge engineer is a compelling new way to conceptualize the teacher's role: as knowledge engineer, the teacher develops an understanding of the student's knowledge base, which can inform curriculum. Second, recognizing the student as expert allows the student to be a more active participant in the learning process. Finally, HyperCard is an appropriate and promising application for the development of knowledge-based systems that encourage the active participation of teachers and students in the development of curriculum.
|
6 |
Building and testing theory on the role of IT in the relationship between power and performance: implementing enterprise performance management in the organization. Wenger, Mitchell R., 2009.
Thesis (Ph.D.)--Virginia Commonwealth University, 2009. / Prepared for: Dept. of Information Systems. Title from title-page of electronic thesis. Bibliography: leaves 168-174.
|
7 |
Learning Analytics from Research to Practice: A Content Analysis to Assess Information Quality on Product Websites. Sarmonpal, Sandra, 19 December 2018.
The purpose of this study was to examine and describe the nature of the research-to-practice gap in learning analytics applications in K12 educational settings, and to characterize how learning analytics are currently implemented and understood. A secondary objective was to advance a preliminary learning analytics implementation framework for practitioners. To achieve these purposes, the study applied quantitative content analysis, using automated text analysis techniques, to assess the quality of information provided on analytics-based product websites against the learning analytics research literature. Because learning analytics implementations require the adoption of analytical tools, characterizing content on analytics-based product websites provides insight into data practices in K12 schools and into how learning analytics are practiced and understood. A major finding was that learning analytics do not appear to be applied in ways that will improve learning outcomes for students as described by the research. A second finding was that the policy influence expressed in the study corpus suggests competing interests within the current policy structure for K12 educational settings. Keywords: quantitative content analysis, automated text analysis, learning analytics, big data, frameworks, educational technology, website content analysis
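A minimal sketch of the general technique named here, scoring product-website copy against a research corpus with automated text analysis, might use TF-IDF vectors and cosine similarity, as below. The two toy corpora are placeholders; this is a generic instance of the method class, not the study's actual instrument.

```python
# Toy automated text analysis: how closely does website marketing copy
# track the language of the research literature? Corpora are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

research_corpus = [
    "learning analytics measure and report learner data to improve outcomes",
    "predictive models support early intervention for at-risk students",
]
website_copy = ["our dashboard reports student data to drive intervention"]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(research_corpus + website_copy)

research_vecs = matrix[: len(research_corpus)]
website_vecs = matrix[len(research_corpus):]

# Similarity of the website text to each research document: one rough
# proxy for how faithfully product claims reflect the research.
print(cosine_similarity(website_vecs, research_vecs).round(2))
```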
|
8 |
Vysvětlení etické konvergence: Případ umělé inteligence / Explaining Ethics Convergence: The Case of Artificial Intelligence. Miotto, Maria Lucia, January 2020.
Although more and more works are showing convergence among the many documents regarding the ethics of artificial intelligence, none of them has tried to explain the reasons for this convergence. The thesis proposed here is that the diffusion of these principles is due to the underlying action of an epistemic community that has promoted the spread and adoption of these values. Through network analysis, the thesis describes the AI ethics epistemic community and its methods of value diffusion, testing which is most effective. To test the first result, two case studies representing political opposites, the United States and the People's Republic of China, are analysed to see which method of diffusion has worked best. What seems evident is that scientific conferences remain a primary factor in the transmission of knowledge. Particular attention must also be given to the role played by universities and research labs (including those of big tech companies), because they have proven to be great aggregators for the epistemic community and are increasing their centrality in the network.
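The kind of centrality evidence the thesis reports can be illustrated with a small network-analysis sketch. The graph below is a toy co-membership network with invented nodes and ties, not the thesis's data.

```python
# Toy epistemic-community network: centrality scores of the kind used to
# argue that universities and research labs are aggregators in the network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("University A", "AI ethics conference"),
    ("Research lab B", "AI ethics conference"),
    ("University A", "Research lab B"),
    ("Tech company C", "Research lab B"),
])

print(nx.degree_centrality(G))       # how connected each actor is
print(nx.betweenness_centrality(G))  # how often each actor bridges others
```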
|
9 |
O uso de tecnologias da inteligência para a gestão da demanda de produtos no ciberespaço: estudo de caso "Captare" [The Use of Technologies of Intelligence on Demand Management of Cyberspace Products: Case Study "Captare"]. Figueiredo, Karen Patrícia Reis, 28 May 2009.
The thesis, entitled "The Use of Technologies of Intelligence on Demand Management of Cyberspace Products: Case Study 'Captare'" and produced in the Master's program in Technologies of Intelligence and Digital Design (TIDD), focuses on how industrial segments can come to know and understand, more effectively, the behavior of retail demand. It analyzes how cognitively grounded intelligence software collects, processes, classifies, and interprets the data generated by business-to-business (B2B) transactions. Following a case-study methodology, the study takes as its main informational vehicle the Captare software, used by industrial segments through an eMarketplace (the Genexis.com business portal). Captare is an information product (an e-business solution) supported by algorithmic intelligence technology in cyberspace and centered on models of cognitive representation. Its conception draws on the hybrid, multidisciplinary convergence of statistics, data mining, marketing, trade marketing, information technology, and hypermapping, enabling interactive communication strategies across the triad of human, interface, and computer, a combination assembled to bring about communication in pursuit of objectives such as an action, a reaction, a service, or information.
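As a loose illustration of the demand-management data mining such a tool performs, the sketch below aggregates hypothetical B2B transactions into a per-product demand signal and a naive moving-average forecast. Column names, data, and the forecasting rule are assumptions, not Captare's implementation.

```python
# Hypothetical B2B demand aggregation: from raw transactions to a weekly
# demand signal per product, plus a naive two-week moving-average forecast.
import pandas as pd

transactions = pd.DataFrame({
    "product": ["soap", "soap", "soap", "shampoo"],
    "week":    [1, 2, 3, 1],
    "units":   [120, 150, 130, 80],
})

# Total units per product per week (rows: products, columns: weeks).
demand = (transactions.groupby(["product", "week"])["units"]
          .sum().unstack(fill_value=0))

# Simple statistics-flavored stand-in for the software's demand modeling.
forecast = demand.T.rolling(window=2).mean().T

print(demand)
print(forecast)
```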
|