241 |
O uso de fontes documentais no jornalismo guiado por dados
Gehrke, Marília January 2018 (has links)
Studying the news sources used in data-driven journalism (DDJ) practices is the purpose of this dissertation. The theoretical approach revisits classifications of news sources already discussed in journalism studies and situates the contemporary context, shaped by social and technological transformations, within the perspectives of the networked society and networked journalism. The main goal is to identify which sources are used in news produced with DDJ techniques, which emerged in this scenario during the 2000s. The study analyzes a corpus of 60 news stories published by O Globo, The New York Times and La Nación, as traditional outlets, and Nexo, FiveThirtyEight and Chequeado, as digital-native ones. Combining theory and the empirical study, it proposes a classification of source types in DDJ news: documentary file, statistics and reproduction. Through this classification, it aims to fill a gap in the theoretical approach to sources, which has so far been only superficially discussed in journalism studies, bringing the use of documents to the forefront of this scenario.
|
242 |
Blended Professional Development: Toward a Data-Informed Model of Instruction
January 2017 (has links)
Data and the use of data to make educational decisions have attained new-found prominence in K-12 education following the inception of high-stakes testing and subsequent linking of teacher evaluations and teacher-performance pay to students' outcomes on standardized assessments. Although the research literature suggested students' academic performance benefits were derived from employing data-informed decision making (DIDM), many educators have not felt efficacious about implementing and using DIDM practices. Additionally, the literature suggested a five-factor model of teachers' efficacy and anxiety with respect to using DIDM practices: (a) identification of relevant information, (b) interpretation of relevant information, (c) application of interpretations of data to their classroom practices, (d) requisite technological skills, and (e) comfort with data and statistics.
This action research study was designed to augment a program of support focused on DIDM, which was being offered at a K-8 charter school in Arizona. It sought to better understand the relation between participation in professional development (PD) modules and teachers' self-efficacy for using DIDM practices. It provided an online PD component, in which 19 kindergarten through 8th-grade teachers worked through three self-guided online learning modules, focused sequentially on (a) identification of relevant student data, (b) interpretation of relevant student data, and (c) application of interpretations of data to classroom practices. Each module concluded with an in-person reflection session, in which teachers shared artifacts they developed based on the modules, discussed challenges, shared solutions, and considered applications to their classrooms.
Results of quantitative data from pre- and post-intervention assessments suggested the intervention positively influenced participants' self-efficacy for (a) identifying and (b) interpreting relevant student data. Qualitative results from eight semi-structured interviews conducted at the conclusion of the intervention indicated that teachers, regardless of previous experience using data, viewed DIDM favorably and were better able to find and draw conclusions from their data than they were prior to the intervention. The quantitative and qualitative data exhibited complementarity, pointing to the same conclusions. The discussion focuses on explaining how the intervention influenced participants' self-efficacy for using DIDM practices, their anxiety around using DIDM practices, and their use of DIDM practices. / Dissertation/Thesis / Doctoral Dissertation Leadership and Innovation 2017
|
243 |
Leveraging Customer Information in New Service Development : An Exploratory Study Within the Telecom Industry
Beijer, Sebastian, Magnusson, Per January 2018 (has links)
There is increasing pressure on service firms to innovate and compete on new offerings. As our lives become more digitized through ubiquitous connectivity and the use of digital devices, companies can now collect vast amounts of data in real time and thus know radically more about their customers. Companies could leverage this growing body of data to develop relevant services based on customer demands. One industry well positioned to benefit from customer information is the telecom industry, owing to fierce competition and a need for innovation in a saturated market. Hence, the purpose of this study is to investigate how telecom companies use customer information in their development of new services by answering the research question: How do telecom companies use customer information within their New Service Development process? To illuminate this, a qualitative study was conducted of three Swedish telecom companies. The findings indicate that telecom companies hold a beneficial position, since the digital nature of their services lets them collect vast amounts of data about their customers. However, they struggle to efficiently integrate the data and seamlessly disseminate the obtained knowledge internally. Hence, leveraging customer information in new service development has not reached its full potential, and how well it is incorporated is determined by the skills of key employees and their collaboration rather than by deployed internal processes.
|
244 |
Projeto de controladores não lineares utilizando referência virtual
Neuhaus, Tassiano January 2012 (has links)
This work presents concepts related to linear and nonlinear system identification, together with the idea of a virtual reference that, combined with the theory of data-based controller design, provides a framework for designing nonlinear controllers via system identification. The use of a virtual reference to obtain the signals needed to characterize a system's optimal controller is the basis of the Virtual Reference Feedback Tuning (VRFT) method. VRFT serves as the starting point for this work's proposal, which, together with nonlinear system identification, yields the ideal controller: the one that makes the closed-loop system behave as specified. The controller is characterized with a rational model structure, chosen because of the wide variety of practical systems this class is able to describe. For rational system identification, an iterative algorithm is used that, based on the plant's input and output signals, identifies the parameters of the predefined controller structure from the signals obtained by virtual reference. To demonstrate the proposed controller-design methodology, illustrative examples are presented both for situations where the ideal controller can be represented by the model class and for situations where it cannot.
|
246 |
Accuracy of Software Reliability Prediction from Different Approaches
Vasudev, R. Sashin, Vanga, Ashok Reddy January 2008 (has links)
Many models have been proposed for software reliability prediction, but none of them captures a sufficient range of software characteristics. We have proposed a mixed approach, using both analytical and data-driven models, for assessing the accuracy of reliability prediction through a case study. This report follows a qualitative research strategy. Data were collected from case studies conducted at three different companies. Based on the case studies, an analysis is made of the approaches used by the companies, along with other data related to the organizations' Software Quality Assurance (SQA) teams. Of the three organizations, the first two are working on reliability prediction, while the third is a growing company developing a product with less focus on quality. Data were collected by interviewing an employee of each organization who leads a team and has held a managing position for at least the last two years. / svra06@student.bth.se
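As one concrete illustration of the "analytical" side of reliability prediction mentioned above, the sketch below fits the classic Goel-Okumoto growth model mu(t) = a(1 - e^(-bt)) to cumulative failure counts. The model choice, data and parameter values are illustrative assumptions, not taken from the report.

```python
import numpy as np

# Synthetic, made-up data: cumulative failures over 50 weeks of testing,
# generated from known parameters so the fit can be checked.
t = np.arange(1.0, 51.0)
a_true, b_true = 100.0, 0.05            # total faults, detection rate
y = a_true * (1 - np.exp(-b_true * t))  # noise-free cumulative failure counts

def fit_goel_okumoto(t, y, b_grid):
    """Least-squares fit of mu(t) = a*(1 - exp(-b*t)).
    For a fixed b the optimal a has a closed form, so only b is grid-searched."""
    best = None
    for b in b_grid:
        f = 1 - np.exp(-b * t)
        a = (y @ f) / (f @ f)           # closed-form least-squares a given b
        sse = np.sum((y - a * f) ** 2)
        if best is None or sse < best[0]:
            best = (sse, a, b)
    return best[1], best[2]

a_hat, b_hat = fit_goel_okumoto(t, y, np.linspace(0.01, 0.2, 191))
print(a_hat, b_hat)   # close to (100, 0.05) on this synthetic data
```

A data-driven alternative would instead learn the failure curve directly from historical project data, which is the contrast the mixed approach above is built on.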
|
247 |
Evaluating the use of ICN for Internet of things
Carlquist, Johan January 2018 (has links)
The market for IoT devices, as well as for constrained wireless sensor networks, continues to grow rapidly. Today, the dominant network paradigm is host-centric, where users have to specify which host they want to receive their data from. Information-centric networking (ICN) is a new paradigm for the future internet, based on named data instead of named hosts. With ICN, a user sends a request for a particular piece of data in order to retrieve it; any participant in the network, router or server, that holds the data will respond to the request. In order to achieve low latency between data creation and its consumption, and to follow data produced sequentially at a fixed rate, an algorithm was developed. This algorithm calculates when to send the next interest message towards the sensor. It uses a 'one-time subscription' approach to send its interest message in advance of the creation of the data, thereby enabling low latency from data creation to consumption. The results show that a consumer can retrieve the data with minimum latency from its creation by the sensor over an extended period of time, without using a publish/subscribe system such as MQTT, which pushes data towards consumers. The performance evaluation of the Content-Centric Networking application on the sensor shows that the application has little impact on the overall round-trip time in the network. Based on these results, this thesis concludes that the ICN paradigm, together with a 'one-time subscription' model, can be a suitable option for communication within the IoT domain where consumers ask for sequentially produced data.
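A rough sketch of such a timing rule, under stated assumptions: the producer emits a sample every `period` seconds and the consumer keeps a smoothed round-trip-time estimate, dispatching each Interest half an RTT before the next sample is due. Class and method names are hypothetical, and the thesis's actual algorithm may differ.

```python
class InterestScheduler:
    """Hypothetical sketch of a 'one-time subscription' timing rule: send each
    Interest early enough that it reaches the producer just as the next
    sample (produced every `period` seconds) becomes available."""

    def __init__(self, period, rtt_estimate):
        self.period = period      # known, fixed data-production interval (s)
        self.rtt = rtt_estimate   # current round-trip-time estimate (s)

    def next_send_time(self, last_data_timestamp):
        # Sample k+1 is created at last_data_timestamp + period; the Interest
        # needs roughly rtt/2 to travel, so dispatch it that much earlier.
        return last_data_timestamp + self.period - self.rtt / 2.0

    def update_rtt(self, sample, alpha=0.125):
        # TCP-style exponentially weighted moving average, so the schedule
        # adapts as network delay varies.
        self.rtt = (1 - alpha) * self.rtt + alpha * sample


sched = InterestScheduler(period=1.0, rtt_estimate=0.1)
print(sched.next_send_time(10.0))   # Interest goes out shortly before t = 11
```

Sending the Interest early like this is what removes the request/response round trip from the data-freshness path, without requiring a broker-based push system such as MQTT.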
|
248 |
Physician Practice Survival: The Role of Analytics in Shaping the Future
Culumber, Janene Jones 29 October 2017 (has links)
This dissertation joins an ongoing discussion in the business management and information technology literature surrounding the measurement of an organization's business analytic capability, the benefits derived from maturing that capability, and the improvements being made toward maturity. The dissertation focuses on the healthcare industry in the United States, and more specifically on independent physician practices specializing in orthopaedics. Drawing on an extensive literature review along with expertise from industry leaders and experienced academic faculty, a survey instrument was developed to measure organizational capabilities, technology capabilities and people capabilities, which together measure an organization's overall business analytic capability maturity. The survey instrument was delivered to 89 C-suite executives in the target population. A response rate of 36% was achieved, resulting in a total of 32 completed responses.
The research study provides evidence that improving an organization's business analytic capability leads to an improvement in the use of analytics to drive business performance. The study also explored whether the use of analytics would improve business outcomes; the results were inconclusive. This could be due to the lag time between the use of analytics and improved business performance. In addition, the study did not have access to actual outcome data, but rather asked the CEOs whether performance in several areas had improved, remained stable or declined, a measure that may not have been precise enough to provide the predictive value needed. As such, this is an area that should be explored further. Finally, the research shows that over the past two years, physician practices have focused on and succeeded in improving their business analytic capabilities. Despite these improvements, opportunities exist for physician practices to further their maturity, particularly in the areas of technology capabilities and people capabilities.
|
249 |
Defining Data Science and Data Scientist
Dedge Parks, Dana M. 29 October 2017 (has links)
The world's data sets are growing exponentially every day due to the large number of devices generating data residue across a multitude of global data centers. What to do with these massive data stores, how to manage them, and who performs these tasks have not been adequately defined and agreed upon by academics and practitioners. Data science is a cross-disciplinary amalgam of skills, techniques and tools that allows business organizations to identify trends and build assumptions that lead to key decisions. It is in an evolutionary state, as new technologies with new capabilities are still being developed and deployed. This document defines the data science tasks and the data scientist skills needed to succeed with analytics across these data stores. Research conducted across twenty-two academic articles, one book, eleven interviews and seventy-eight surveys is combined to articulate convergence on the term data science. In addition, the research identified five key skill categories (themes), comprising fifty-five competencies, that are used globally by data scientists to perform the art and science of data science.
Varying combinations of statistics, programming, model development and calculation are used to determine outcomes that lead global organizations to make strategic decisions every day.
This research is intended to provide a constructive summary of the terms data science and data scientist in order to spark the dialogue needed to formally finalize their definitions and, ultimately, to change the world by establishing guidelines for how data science is performed and measured.
|
250 |
Data-driven smart mobility as an act to mitigate climate change, a case of Hangzhou
Wang, Yulu January 2020 (has links)
The transport sector is responsible for a significant and growing share of greenhouse gas emissions, and urgent action is required in the sector to face the challenge of accelerating global climate change. Major trends, including global urbanization, the widespread application of digital technologies, and broad demand for sustainable development, have created new opportunities for data-driven smart mobility. This research aims to explore the potential of data-driven smart mobility for achieving Sustainable Development Goal 11.2, "provide access to safe, affordable, accessible and sustainable transport systems for all," and Sustainable Development Goal 13.2, "take urgent action to combat climate change and its impacts" and "integrate climate change measures into national policies, strategies and planning," thereby reducing greenhouse gas emissions. To meet this aim, the research explores understandings and innovations of data-driven smart mobility for achieving urban decarbonization, as well as barriers in current practice. Hangzhou, the capital of Zhejiang Province in China, was selected as the case study to examine data-driven smart mobility approaches. The results show that the potential of data to tackle climate issues lies in efficient transport operation and changes in travel behavior. Data technologies have been widely applied to improve the integration of travel modes and the efficiency of transport management, reducing greenhouse gas emissions from road traffic. However, there are few drivers to mine data resources for travel-behavior change. Moreover, data-driven smart mobility initiatives in urban areas involve multiple stakeholders but offer limited data sharing and openness. Considering the disruptive effects and potential promise of big data technologies, implementing smart mobility requires a public data strategy with a holistic view of complex urban challenges and global climate change.
|