231 |
Big Data och Hadoop : Nästa generation av lagring Lindberg, Johan January 2017 (has links)
The goal of this report and study is to determine, at a theoretical level, the possibilities for Försäkringskassan IT to change its platform for storing the data used in its daily activities. Försäkringskassan collects immense amounts of data every day, including personal information, lines of programming code, payments, and customer service tickets. Today, everything is stored in large relational databases, which leads to problems with scalability and performance. The new platform studied in this report is built on a storage technology named Hadoop. Hadoop is designed to store and process data distributed across so-called clusters, which consist of commodity server hardware. The platform promises near-linear scalability, the ability to store all data with high fault tolerance, and the capacity to handle massive amounts of data. The study is carried out through theoretical studies as well as a proof of concept. The theoretical studies focus on the background of Hadoop, its structure, and what to expect in the future. The storage setup used at Försäkringskassan today is specified and compared to the new platform. The proof of concept is conducted in a test environment at Försäkringskassan running a Hadoop platform from Hortonworks; its purpose is to show how data is stored and that so-called unstructured data can be stored. The study finds no theoretical problems, and a move to the new platform should be possible. It does, however, move the handling of data from before storage to after: today's platform relies on relational databases that require data to be neatly structured before it can be stored, whereas Hadoop stores all data without any structure but requires more work and knowledge to retrieve it.
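The contrast this abstract draws between relational storage and Hadoop is essentially schema-on-write versus schema-on-read, which a small sketch can make concrete. The snippet below is only an illustration under invented assumptions: SQLite and an in-memory list of JSON lines stand in for a relational cluster and HDFS, and the record fields are made up.

```python
import json
import sqlite3

# Schema-on-write (relational): the table structure must be declared up front,
# and a record that does not match it cannot be loaded without changing the schema.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tickets (id INTEGER, customer TEXT, issue TEXT)")
db.execute("INSERT INTO tickets VALUES (?, ?, ?)", (1, "Alice", "login failure"))

# Schema-on-read (Hadoop-style): heterogeneous raw records are appended as-is;
# structure is imposed only when the data is read back and queried.
raw_store = [
    '{"id": 2, "customer": "Bob", "issue": "payment delay"}',
    '{"id": 3, "payment": 1250.0, "currency": "SEK"}',  # different shape, still stored
]
parsed = [json.loads(line) for line in raw_store]  # structure applied at read time
payments = [record for record in parsed if "payment" in record]
print(payments)  # [{'id': 3, 'payment': 1250.0, 'currency': 'SEK'}]
```

The tradeoff the abstract describes is visible in the last three lines: the cost of imposing structure is paid at query time, which is exactly the extra retrieval work that moves from before storage to after.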
|
232 |
Big Data, capacitações dinâmicas e valor para o negócio. / Big data, dynamic capabilities and business value. Michel Lens Seller 17 May 2018 (has links)
The combination of social media, mobility, and cloud computing technologies has dramatically increased the volume, variety, and velocity of data available to firms. Many companies have been looking at this phenomenon, known as Big Data, as a source of business value. The literature identifies different mechanisms for transforming Big Data into business value. The first of these is agility, herein understood as the ability to sense and rapidly respond to changes and opportunities in the competitive environment. Another mechanism is the use of Big Data as an enabler of dynamic capabilities that result in operational improvements through the deepening (exploitation) of a given operational capability. Finally, Big Data can facilitate dynamic capabilities that result in innovation (the exploration of new capabilities) and in the launch of new products and services in the market. Within this context, the goal of this study is to investigate how Big Data is used in companies from different competitive scenarios and with different levels of IT capability. It is also part of the objectives to investigate how companies changed their processes to accommodate the huge volume of data made available by Big Data. Through case studies in large companies from different industries and with widely varying Big Data approaches, the study shows Big Data acting as an enabler of dynamic capabilities that result in the improvement of operational capabilities, in business diversification, and in innovation. A trend of coupling machine learning with Big Data solutions is also identified when the objective is operational agility. IT capability proves to be a determinant of the quantity and complexity of the competitive actions companies launch using Big Data. To conclude, it is reasonable to anticipate that, thanks to the simplification brought by cloud technologies, IT resources will increasingly be freed up to work closer to the business (for example, in Big Data initiatives), strengthening dynamic capabilities and creating business value.
|
233 |
Is nerd the new sexy? A study on the reception of the television series The Big Bang Theory / Is nerd the new sexy? Um estudo sobre a recepção da série televisiva The Big Bang Theory Soraya Madeira da Silva 06 June 2016 (has links)
nÃo hà / Esta pesquisa tem como objetivo investigar a relaÃÃo das pessoas com a sÃrie televisiva The Big Bang Theory e sua percepÃÃo a respeito de se considerarem ou serem consideradas nerds. Este grupo, durante muito tempo visto e retratado como pÃria da sociedade, vem ganhando fama nos Ãltimos anos e tem sua imagem reformulada nos meios midiÃticos. Este trabalho, em um primeiro momento, procura traÃar o perfil do nerd, analisando seu histÃrico, caracterÃsticas e representaÃÃes midiÃticas, em produtos como sÃries e filmes, para fazer uma reflexÃo sobre o que à ser nerd atualmente. Para esta avaliaÃÃo, os autores Nugent (2008), Goffman (1988), Fernando e Rios (2001) e Bourdieu (1983) sÃo usados para identificar as caracterÃsticas distintivas do grupo, sua estigmatizaÃÃo perante a sociedade e sua relaÃÃo com o consumo e a mÃdia. Em seguida, levanta-se uma discussÃo a respeito da conexÃo entre comunicaÃÃo e cultura, utilizando autores como Caune (2004), Thompson (2001), Schulman (2004) e Morley (1996), dentre outros, para ressaltar a importÃncia dos Estudos Culturais dentro do Ãmbito desta pesquisa. ProduÃÃo e consumo estÃo interligados quando analisamos produtos culturais veiculados em meios de comunicaÃÃo de massa, por isso sÃo analisados as sÃries televisivas, sua classificaÃÃo, relaÃÃo com o pÃblico e a importÃncia dos personagens que as compÃem como elementos de conexÃo entre produto e audiÃncia. Jost (2012), Esquenazi (2010), Seger (2006), Davis (2001) e Field (2001) sÃo utilizados para explanar os processos de produÃÃo de sÃries e de criaÃÃo de personagens, fundamentais para entender o sucesso da sÃrie televisiva americana The Big Bang Theory, exibida pela CBS (EUA) e pela Warner Channel (Brasil). ApÃs uma anÃlise detalha dos personagens destas sitcom, apresenta-se os resultados da pesquisa realizada para este trabalho. Como metodologia, um questionÃrio estruturado, com abordagem quantitativa e qualitativa, foi aplicado em uma amostra aleatÃria de 600 pessoas, com o objetivo de investigar seus hÃbitos de consumo, sÃries favoritas, conexÃo com os personagens, percepÃÃes acerca da sÃrie The Big Bang Theory e sua visÃo sobre considerarem-se ou serem considerados nerds por outras pessoas. Na conclusÃo desta pesquisa, relata-se que a relaÃÃo das pessoas com os produtos culturais que consomem à baseada por afetos e identificaÃÃo com o enredo e personagens da histÃria. Em relaÃÃo à sÃrie The Big Bang Theory, opiniÃes diversas sÃo apresentadas sobre a estereotipificaÃÃo dos personagens e evoluÃÃo da narrativa. Por fim, conclui-se que ser nerd, ou ser considerado assim, hoje em dia ainda à algo que carrega bastante negatividade para quem nÃo se insere no grupo, mas se torna um fator de empoderamento para quem se inclui. Esta identidade à construÃda atravÃs do alto consumo de produtos culturais que visam estabelecer uma conexÃo afetiva com essas pessoas e oferecer uma projeÃÃo da narrativa de suas vidas. / This research aims to investigate the relationship of people with the TV show The Big Bang
Theory and their perception as to whether they consider themselves or are considered nerds.
This group, which has long been seen and treated as a pariah of society, has gained fame in
recent years and had his image reformulated in the media. This work, in a first moment, seeks
to address the nerd profile, analyzing their history, characteristics and media representations
in products as TV series and movies, to make a reflection about what means to be nerd
currently. For this analysis, the authors Nugent (2008), Goffman (1988), Fernando and Rios
(2001) and Bourdieu (1983) are used to identify the group's distinguishing characteristics,
their stigmatization in society and its relationship with consumption and the media. Then, a
discussion about the connection between communication and culture is aroused, using authors
like Caune (2004), Thompson (2001), Schulman (2004) and Morley (1996), among others, to
highlight the importance of cultural studies within the scope of this research. Production and
consumption are intertwined when we look conveyed cultural products in mass media, so TV
series, their classification, public relationship and the importance of the characters that make
them up are analyzed as elements connecting product and audience. Jost (2012), Esquenazi
(2010), Seger (2006), Davis (2001) and Field (2001) are used to explain the production
processes of TV series and character creation, fundamentals to understand the success of an
American TV show The Big Bang Theory, displayed on CBS (EUA) and Warner Channel
(Brazil). After a detailed analyze of this sitcom's characters, the results of the research carried
out for this job are presented. As methodology, a structured survey, with a quantitative and a
qualitative approach was applied in a random sample of 600 person, with the purpose of
investigate their consuming habits, favorite TV series, connection with characters, perceptions
about The Big Bang Theory and their vision about consider themselves or be considered nerd
by others. At the conclusion of this research, it is reported that the relationship between people
and cultural products they consume is based on affect and identification with the plot and
characters in the story. Regarding The Big Bang Theory series, different opinions are
presented on the character stereotyping and narrative evolution. Finally, it's concluded that
being a nerd, or be considered as well, nowadays it is still something that carries a lot of
negativity for those who do not fall within the group, but becomes a empowering factor for
who is included. This identity is constructed through the high consumption of cultural
products aimed at establishing an emotional connection with these people and offering a
projection of the narrative of their lives.
|
234 |
Condicionantes do uso efetivo de big data e business analytics em organizações privadas: atitudes, aptidão e resultados SANTOS, Ijon Augusto Borges dos 31 May 2016 (has links)
This dissertation seeks to explain the determining factors for the effective adoption of Big Data and Business Analytics by private organizations in Pernambuco, in terms of attitudes, skills, and results. To this end, a theoretical-conceptual overview is assembled on the growth of data traffic in the age of the Digital Revolution and the willingness of organizations to appropriate the information and communication technologies that are transforming the modus faciendi and modus pensandi of society. Two foundational theories stand out in the research corpus: the Cognitive Mediation Networks Theory and the Structuration Theory (the basis of the Structurational Model of Technology). Both are explored around the question of the technology-use duality, in which technological artifacts interacting with human actions start a process of mutual influence between these elements, constituting a new form of mediation called Hyperculture. Using a quantitative research method, these constructs are related to each other and investigated among 183 strategic leaders from Pernambuco, who are also compared, by means of a specially prepared questionnaire, with equivalent individuals of other origins and nationalities. The results indicate the companies' level of readiness on this subject and its relation to success or failure, considering the levels of hyperculture, analytical capability, and the information and communication technology conditions existing in the companies. At the end of the study, possible developments, implications, and applications of the introduced concepts are presented.
|
235 |
Personagens emolduradas: os discursos de gênero e sexualidade no Big Brother Brasil 10 / Characters in frames: Gender and Sexuality discourses in Big Brother Brasil 10 ALMEIDA, Katianne de Sousa 16 September 2011 (has links)
Open your eyes and see. There is nothing more incredible than the act of visualizing the innumerable possibilities before our eyes: images. They are more than a combination of colors and forms; they produce meanings and, logically, symbols. From this production of images some questions can be raised, such as: what do they intend to represent? This dissertation focuses on the images broadcast on the television program Big Brother Brasil (BBB) in its tenth edition, which took place in 2010. When I started the research, the question that I, a television viewer and anthropologist, carried with me was: how are the discourses of femininity, masculinity, and homosexuality reproduced? Above all, how was the central model, the hegemonic references, reproduced? These references seem to subordinate, devalue, and limit the plural existences of masculinity, femininity, and homosexuality. In the end, I realized that the field data did not only show the assumptions of normativity: the subjects were not completely framed; there were dislocations. Since nothing is simple in the field of gender and sexuality, there were several tensions, and the diversity proposed by the program had its limits. In proposing to denaturalize BBB, I aimed to understand how an audiovisual product watched by the most varied social groups can influence, sediment, or prompt reflection on the formal conventions of gender and sexuality.
|
236 |
Um estudo sobre o espaço de trabalho informativo e o acompanhamento em equipes ágeis de desenvolvimento de software / A study on informative workspaces and tracking in agile development teams Renan de Melo Oliveira 24 January 2012 (has links)
References to managing and displaying relevant information in the software development workspace can be found in several agile methods, such as Extreme Programming [Beck, 1999; Beck and Andres, 2006], Scrum [Schwaber, 2008], Crystal Clear [Cockburn, 2005], and Lean Software Development [Poppendieck and Poppendieck, 2007]. In this work, we refer to this kind of activity as agile tracking, relating it to the tracker role defined by Beck [1999]. We noted the importance of performing actions (practices) based on a set of principles serving as guidelines [Poppendieck and Poppendieck, 2007], which is deeply associated with agile methods. Taking this into account, we performed a literature review to discuss agile principles that could affect the execution of agile-tracking tasks, and we also describe works directly related to metrics, both in agile methods and in software engineering in general. Even so, we could not find empirical research aimed at raising or understanding the aspects related to successfully performing this kind of task in agile environments, which could be helpful for managing information and informative workspaces. To address this, we performed a study using a sequential mixed-methods research approach [Creswell, 2009], applied to a set of fifteen agile teams gathered in the IME-USP "Laboratory of Extreme Programming" course in 2010 and 2011. The research was performed in four sequential phases. In the first phase, we made several suggestions to the agile teams regarding agile tracking, using an approach based on action research [Thiollent, 2004], in order to gather relevant aspects of their use of agile tracking. Based on these results, we clustered some aspects as heuristics for agile tracking, following the same model used by Hartmann and Dymond [2006]. In phase two, we applied a survey to evaluate the validity of the proposed heuristics. In phase three, we conducted semi-structured interviews with team members to understand the reasons behind the proposed heuristics, analyzing the data with grounded theory coding techniques [Strauss and Corbin, 2008]. In phase four, we reapplied the phase-two survey in a different environment to triangulate the heuristics evaluation data. As the result of this empirical research, a set of heuristics for agile tracking was established, along with quantitative evaluations of its aspects in two environments and several qualitative considerations on its use. We also map both the heuristics and their related concepts onto the available literature, identifying aspects that already existed but were expanded by this research, as well as aspects not yet discussed that can be considered new in the area.
|
237 |
Adoption of Big Data And Cloud Computing Technologies for Large Scale Mobile Traffic Analysis / L’adoption des technologies Big Data et Cloud Computing dans le cadre de l’analyse des données de trafic mobile Ribot, Stephane 23 September 2016 (has links)
A new economic paradigm is emerging as enterprises generate and manage increasing amounts of data and look to technologies like cloud computing and Big Data to improve data-driven decision making and, ultimately, performance. Mobile service providers are an example of firms looking to monetize the data collected from their users. This thesis explores the determinants of cloud computing adoption and Big Data adoption at the user level. We employ a quantitative research methodology, operationalized through a cross-sectional survey of 169 researchers involved in mobile data analysis, so that temporal consistency could be maintained for all variables. Goodhue's task-technology fit (TTF) model was supported by the results, analyzed using partial least squares (PLS) structural equation modeling (SEM), which reflect positive relationships of individual, technology, and task factors on TTF for mobile data analysis. This research makes two contributions: the development of a new TTF construct (a task-Big Data/cloud computing technology fit model) and the testing of that construct in a model that overcomes the rigidity of the original TTF model by addressing technology through five subconstructs related to the technology platform (Big Data) and the technology infrastructure (cloud computing intention to use). These findings provide direction to mobile service providers for implementing cloud-based Big Data tools that enable data-driven decision making and monetize the output of mobile data traffic analysis.
|
238 |
"Mentors' perception of the effectiveness of the Big Brother Big Sister mentor training programme"Jano, Rubina January 2008 (has links)
Magister Psychologiae - MPsych / Mentoring has gained a great deal of popularity across various professional fields and disciplines over the past few years. More recently, planned mentoring has become an important form of intervention with young people (Philip, 2003). Although mentoring can be an effective strategy for dealing with youth, mentoring is only as good as the relationship that develops between mentors and mentees and the match that is made between the two parties. The number of mentor programmes continues to grow, yet the quality of these programmes remains unknown, as this area lacks agreed-upon standards and benchmarks that could be used to determine their effectiveness (Sipe, 1988-1995). The primary aim of this study is to evaluate mentors' perceptions of the effectiveness of a mentor training programme run by Big Brother Big Sister South Africa. / South Africa
|
239 |
Managing consistency for big data applications : tradeoffs and self-adaptiveness / Gérer la cohérence pour les applications big data : compromis et auto-adaptabilité Chihoub, Houssem Eddine 10 December 2013 (has links)
In the era of Big Data, data-intensive applications handle extremely large volumes of data while requiring fast processing times. A large number of such applications run in the cloud in order to benefit from cloud elasticity, easy on-demand deployment, and cost-efficient pay-as-you-go usage. In this context, replication is an essential feature for overcoming Big Data challenges: it enables high availability through multiple replicas, fast access to local copies, fault tolerance, and disaster recovery. However, replication introduces the major issue of data consistency across the different copies. Consistency management is critical for Big Data systems. Strong consistency models impose serious limitations on scalability and performance because of the synchronization they require, while weak and eventual consistency models reduce the performance overhead and enable high levels of availability but may, under certain scenarios, tolerate too much temporal inconsistency. In this Ph.D. thesis, we address this issue of consistency tradeoffs in large-scale Big Data systems and applications. We first focus on consistency management at the storage system level and propose an automated self-adaptive model (named Harmony) that scales the consistency level, and the number of replicas involved in operations, up or down at runtime as needed, providing the highest possible performance while preserving the application's consistency requirements. In addition, we present a thorough study of the impact of consistency management on the monetary cost of running in the cloud, and we leverage this study to propose a cost-efficient consistency tuning approach (named Bismar). In a third direction, we study the impact of consistency management on energy consumption within the data center and, based on our findings, investigate adaptive configurations of the storage system cluster that target energy savings. To complete the system-side study, we turn to the application level. Applications differ, and so do their consistency requirements, which cannot be understood at the storage system level alone. We therefore propose an application behavior model that captures the consistency requirements of an application's data accesses. Based on this model, we propose an online prediction approach, named Chameleon, that adapts to the application's specific needs and provides customized consistency.
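To make the idea of self-adaptive consistency concrete, the sketch below shows the general shape of such a policy. It is a simplified, hypothetical illustration, not Harmony's actual algorithm: the stale-read estimator, the thresholds, and the two Cassandra-style level names are invented stand-ins.

```python
# Hypothetical adaptive-consistency policy: strengthen the consistency level
# when the estimated rate of stale reads exceeds the application's tolerance,
# and relax it otherwise to keep latency low.

def estimate_stale_read_rate(write_rate, read_rate, replication_lag_s):
    """Crude estimate of the fraction of reads hitting a not-yet-replicated write."""
    if read_rate == 0:
        return 0.0
    return min(1.0, write_rate * replication_lag_s / read_rate)

def choose_consistency(write_rate, read_rate, lag_s, tolerated_stale_rate):
    if estimate_stale_read_rate(write_rate, read_rate, lag_s) > tolerated_stale_rate:
        return "QUORUM"  # stronger: more replicas contacted per read, higher latency
    return "ONE"         # weaker: single-replica reads, more temporal inconsistency

print(choose_consistency(write_rate=200, read_rate=1000, lag_s=0.05,
                         tolerated_stale_rate=0.005))  # -> QUORUM
```

The design point is that the tradeoff is re-evaluated continuously at runtime from observed workload metrics, rather than fixed once at deployment time.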
|
240 |
Uma análise comparativa de ambientes para Big Data: Apache Spark e HPAT / A comparative analysis for Big Data environments: Apache Spark and HPAT Rafael Aquino de Carvalho 16 April 2018 (has links)
This work compares the performance and stability of two Big Data processing frameworks: Apache Spark and the High Performance Analytics Toolkit (HPAT). The comparison was performed using two applications: summing the elements of a one-dimensional vector and the K-means clustering algorithm. The experiments were performed in distributed and shared-memory environments with different numbers and configurations of virtual machines. By analyzing the results, we conclude that HPAT outperforms Apache Spark in our case studies. We also provide an analysis of both frameworks in the presence of failures.
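As a rough illustration of the first benchmark, the sketch below shows what a distributed vector sum looks like on the Apache Spark side. It is a minimal PySpark example with an invented vector size; the thesis's actual benchmark code, cluster configuration, and the HPAT counterpart are not reproduced here.

```python
# Minimal PySpark sketch of the vector-sum benchmark; the data is generated
# locally, so this shows the shape of the workload, not its experimental scale.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("vector-sum").getOrCreate()
sc = spark.sparkContext

vector = sc.parallelize(range(10_000_000))  # distributed one-dimensional vector
total = vector.sum()                        # reduced in parallel across partitions
print(total)                                # 49999995000000

spark.stop()
```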
|