1 |
The Purpose of Data Value / Data Science Projects: The Challenges of Implementation. Aguinaga, Jorge Alberto; Raza García, Mabel. 25 November 2021 (has links)
Data Week UPC 2021, Day 2 / Data Week UPC is an annual event organized by the Faculties of Business and Engineering to bring together researchers and experts in business management to reflect on the role of Data Science in generating value in organizations.
Nine speakers from different institutions will join the four dates of Data Week 2021, on November 23, 25, 26, and 27, to reflect on the challenges of transforming data for decision-making.
Don't miss the opportunity to take part in this forum, where we will discuss the main trends in the application of data science to business management.
7:00 PM THE PURPOSE OF DATA VALUE
We could all agree that the most useful and powerful currency in business is data. But that statement is incomplete: data must be put to use before it matters, since data per se has no value. This talk will show, from experience, how to turn data into value for the organization, the obstacles that arise along the way, and how to overcome them.
8:00 PM DATA SCIENCE PROJECTS: THE CHALLENGES OF IMPLEMENTATION
Today, the challenges of data science arrive at implementation time, when we must define in which areas of the organization our analytics models can add value, working in collaboration with key stakeholders. In this presentation we will share the challenges and lessons learned from implementing data science projects.
|
2 |
HyperSpace: Data-Value Integrity for Securing Software. Yom, Jinwoo. 19 May 2020 (has links)
Most modern software attacks are rooted in memory corruption vulnerabilities: they overwrite security-sensitive data values (e.g., return addresses, function pointers, and heap metadata) with unintended values. Current state-of-the-art policies, such as Data-Flow Integrity (DFI) and Control-Flow Integrity (CFI), are effective but often struggle to balance precision, generality, and runtime overhead. In this thesis, we propose Data-Value Integrity (DVI), a new defense policy that enforces the integrity of the "data value" of security-sensitive control and non-control data. DVI breaks an essential step of memory-corruption-based attacks by detecting when a security-sensitive data value has been compromised. To show the efficacy of DVI, we present HyperSpace, a prototype that enforces DVI to provide four representative security mechanisms, including Code Pointer Separation (DVI-CPS) and Code Pointer Integrity (DVI-CPI). We evaluate HyperSpace with SPEC CPU2006 and real-world servers, and we test it against memory-corruption-based attacks, including three real-world exploits and six attacks that bypass existing defenses. Our evaluation shows that HyperSpace detects all attacks and introduces low runtime and memory overhead: 1.02% and 6.35% performance overhead for DVI-CPS and DVI-CPI, respectively, and approximately 15% memory overhead overall. / Master of Science / Many modern attacks originate from memory corruption vulnerabilities. These attacks, such as buffer overflows, allow an adversary to compromise a system by executing arbitrary code or escalating access privileges for malicious actions. Unfortunately, today's common programming languages, such as C/C++, are especially prone to memory corruption.
These languages form the foundation of our software stack; thus, many applications written in them, such as web browsers and database servers, inherit their shortcomings. Numerous widely adopted security mechanisms address this issue, but they all fall short of providing complete memory safety. Security researchers have since proposed various solutions to mitigate these shortcomings, yet these defense techniques are either too narrowly scoped, incur high runtime overhead, or require significant additional hardware resources. As a result, they do not scale to larger applications or must be combined with other techniques to provide a stronger security guarantee. This thesis presents Data-Value Integrity (DVI), a new defense policy that enforces the integrity of the "data value" of sensitive C/C++ data, including function pointers, virtual function table pointers, and inline heap metadata. DVI offers wide-scoped security while remaining scalable, making it a versatile solution to a range of memory corruption vulnerabilities. The thesis also introduces HyperSpace, a prototype that enforces DVI. The evaluation shows that HyperSpace outperforms state-of-the-art defense mechanisms, with lower performance and memory overhead and stronger, more general security guarantees.
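The core DVI idea, verifying a security-sensitive value against a protected copy kept in a disjoint region before the value is used, can be illustrated conceptually. The sketch below is only an illustration of the general shadow-value technique, not the thesis's compiler-based C/C++ implementation; all names are invented for this example.

```python
# Conceptual sketch of data-value integrity: keep a trusted shadow copy
# of each security-sensitive slot and verify the live value before use.
# Names are invented; real DVI operates at the compiler/runtime level.

class ShadowTable:
    """Stores trusted copies of sensitive values in a disjoint region."""
    def __init__(self):
        self._shadow = {}

    def protect(self, slot, value):
        # Record the trusted value when the slot is legitimately written.
        self._shadow[slot] = value
        return value

    def verified_load(self, slot, live_value):
        # An attacker who corrupts only the live value is detected here.
        if self._shadow.get(slot) != live_value:
            raise RuntimeError(f"DVI violation on slot {slot!r}")
        return live_value

table = ShadowTable()
memory = {"func_ptr": table.protect("func_ptr", 0x401000)}

# A legitimate use passes the check:
assert table.verified_load("func_ptr", memory["func_ptr"]) == 0x401000

# Simulated memory corruption is caught before the "indirect call":
memory["func_ptr"] = 0xDEADBEEF
try:
    table.verified_load("func_ptr", memory["func_ptr"])
except RuntimeError as e:
    print("blocked:", e)
```

The sketch shows why the check breaks an essential attack step: the corrupted value never reaches the point of use.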
|
3 |
Adoption of Big Data and Cloud Computing Technologies for Large Scale Mobile Traffic Analysis. Ribot, Stephane. 23 September 2016 (has links)
A new economic paradigm is emerging as enterprises generate and manage increasing amounts of data and look to technologies like cloud computing and Big Data to improve data-driven decision-making and, ultimately, performance. Mobile service providers are an example of firms seeking to monetize the mobile data they collect. This thesis explores the determinants of cloud computing and Big Data adoption at the user level, asking: to what extent do cloud computing and Big Data technologies contribute to the tasks carried out by data scientists? We employ a quantitative, hypothetico-deductive methodology grounded in classical adoption theory: the hypotheses and conceptual model are inspired by Goodhue's task-technology fit (TTF) model, with factors covering Big Data and cloud computing, the task, the technology, the individual, TTF, utilization, and realized impacts. Seven hypotheses, designed to address weaknesses of earlier models, were tested through a cross-sectional survey of 169 researchers working on mobile data analysis, so that temporal consistency could be maintained across all variables. The TTF model was supported by results analyzed using partial least squares (PLS) structural equation modeling (SEM), which showed positive relationships between individual, technology, and task factors and TTF for mobile data analysis. The research makes two contributions: the development of a new TTF construct, a task-Big Data/cloud computing technology fit model, and the testing of that construct in a model that overcomes the rigidity of the original TTF model by addressing technology through five subconstructs related to the technology platform (Big Data) and the technology infrastructure (cloud computing intention to use). These findings give mobile service providers direction for implementing cloud-based Big Data tools to enable data-driven decision-making and monetize the output of mobile data traffic analysis.
|
4 |
Data assets in digital firms and ICTs : How data strategy shapes the process of internationalization. Behse, Marc. January 2021 (has links)
Digitalized companies add complexity to the theory of internationalization. To gain momentum in a foreign market, knowledge about specific regional aspects and customer behavior is crucial. In a modern business environment, data supports decisions, enhances performance, and contributes to innovative business models. Due to its unique characteristics, data is perceived as a hidden yet valuable asset. In this thesis, I compare the role of data in two types of companies through a qualitative empirical study of German ventures. As a company-internal data gathering practice, truly digital firms are expected to take advantage of digital platforms in the context of internationalization, while Information and Communication Technology companies are expected to collect data by enhancing their physical products with Internet of Things applications or interfaces (Lee and Lee, 2015; Monaghan et al., 2020). I argue that the process of internationalization is driven by data in both types of companies. My results indicate that digital platforms are the primary method of gathering information about foreign markets, and that the importance of the Internet of Things increases at a later stage of the internationalization process. An integral perception of data and its versatile areas of application can create fertile ground for business opportunities.
|
5 |
High Performance Analytics. Kalický, Andrej. January 2013 (has links)
This thesis explains the Big Data phenomenon, which is characterised by rapid growth in the volume, variety, and velocity of data (information assets) and drives a paradigm shift in analytical data processing. The thesis aims to provide a complete and consistent overview of High Performance Analytics (HPA), including the problems and challenges at the state of the art of advanced analytics. The overview of HPA introduces the classification, characteristics, and advantages of specific HPA methods utilising various combinations of system resources. In the practical part of the thesis, an experimental assignment focuses on the analytical processing of a large dataset using an analytical platform from SAS Institute. The experiment demonstrates the convenience and benefits of In-Memory Analytics (a specific HPA method) by evaluating the performance of different analytical scenarios and operations.
|
6 |
Big Data and AI in Customer Support : A study of Big Data and AI in customer service with a focus on value-creating factors from the employee perspective. Licina, Aida. January 2020 (has links)
The advance of the Internet has resulted in an immensely interconnected world that produces a tremendous amount of data, and it has changed our daily lives and behaviours. The trend is especially visible in e-commerce, where customers have come to demand more and more from product and service providers. With rising competition, companies must adopt new ways of working to keep their market position and to retain and attract customers. One important factor in this is excellent customer service, and companies today adopt technologies such as Big Data analytics (BDA) and AI to provide it. This study investigates how two Swedish companies extract value from their customer service with the help of BDA and AI. It also strives to build an understanding of the expectations, requirements, and implications of these technologies from the participants' perspective, in this case the employees of the businesses studied. Many companies fail to see the true potential these technologies can bring, especially in customer service. By pinpointing the 'value factors' that the participating companies extract, this study may encourage the implementation of digital technologies in customer service regardless of company size. The thesis was conducted with a qualitative approach, using semi-structured interviews and systematic observations at two Swedish companies active on the Chinese market. The findings show that the companies actively use BDA and AI in their customer service, and several value factors are pinpointed at different stages of the service process.
The most recurring themes are "proactive support", "relationship establishment", "identifying attitudes and behaviours", and "real-time support". As for the value-creating factors before and after the actual interaction, the recurring themes are "competitive advantage", "high-impact customer insights", "classification", "practicality", and "reflection and development". This essay provides knowledge that can help companies deepen their understanding of how customer service supported by BDA and AI contributes to competitive advantage and customer loyalty. Since the thesis focused only on Swedish organizations on the Shanghainese market, further research on Swedish companies would be of interest, as China is seen as a forerunner in utilizing these technologies.
|
7 |
Examining Opioid-related Overdose Events in Dayton, OH using Police, Emergency Medical Services and Coroner's Data. Pan, Yuhan. 06 October 2020 (has links)
No description available.
|
8 |
Implementation of Industrial Internet of Things to improve Overall Equipment Effectiveness. Björklöf, Christoffer; Castro, Daniela Andrea. January 2022 (has links)
The manufacturing industry is competitive and constantly strives to improve overall equipment effectiveness (OEE). In the transition to smart production, digital technologies such as the Industrial Internet of Things (IIoT) are highlighted as important: IIoT platforms enable real-time monitoring and are therefore expected to improve OEE by enabling the analysis of real-time data and production availability. A qualitative study with an abductive approach has been conducted. The empirical material was collected through a case study of a heavy-duty vehicle manufacturer, the theoretical framework is based on a literature study, and a thematic analysis was used to derive appropriate themes for analysis. The study concluded that the challenges and enablers of implementing IIoT to improve OEE can be divided into technical and cultural factors. Technical challenges and enablers mainly concern interoperability, compatibility, and cyber security, while cultural factors revolve around digital acceptance, competence, encouragement of digital curiosity, and building knowledge and understanding of OEE. Finally, the study concludes that implementing IIoT has a positive effect on OEE, since it ensures consistent and accurate data, which lays a solid foundation for production decisions. Digitalization of production also enhances lean practices, which are considered a key element in improving OEE.
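The OEE metric the study targets is conventionally computed as Availability x Performance x Quality (the standard formulation; the sketch below and its example numbers are illustrative and are not taken from the case study):

```python
def oee(planned_minutes, downtime_minutes, ideal_cycle_minutes,
        total_count, good_count):
    """Standard OEE = Availability x Performance x Quality."""
    run_time = planned_minutes - downtime_minutes
    availability = run_time / planned_minutes          # share of planned time running
    performance = (ideal_cycle_minutes * total_count) / run_time  # speed vs. ideal
    quality = good_count / total_count                 # share of good units
    return availability * performance * quality

# Example shift: 480 planned minutes, 60 minutes of stops,
# 1-minute ideal cycle, 378 units produced, 340 of them good.
print(round(oee(480, 60, 1.0, 378, 340), 4))  # → 0.7083
```

Consistent real-time IIoT data feeds exactly these inputs (downtime, counts, cycle times), which is why the study links data accuracy to better production decisions.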
|
9 |
Information Needs for Water Resource and Risk Management : Hydro-Meteorological Data Value and Non-Traditional Information. Girons Lopez, Marc. January 2016 (links)
Data availability is extremely important for water management. Without data it would not be possible to know how much water is available or how often extreme events are likely to occur. The hydro-meteorological data that are usually available often have limited representativeness and are affected by errors and uncertainties. Their collection is also resource-intensive, so many areas of the world are severely under-monitored, while other areas have seen an unprecedented, yet local, wealth of data in recent decades. Additionally, the spread of new technologies, together with the integration of different approaches to water management science and practice, has uncovered a large amount of soft information that can complement and expand the possibilities of water management. This thesis presents a series of studies that address these data opportunities. Firstly, the hydro-meteorological data needs for correctly estimating key processes for water resource management, such as precipitation and discharge, were evaluated. Secondly, the use of non-traditional sources of information, such as social media and human behaviour, to improve the efficiency of flood mitigation actions was explored. The results provide guidelines for determining basic hydro-meteorological data needs. For instance, a density of 24 rain gauges per 1000 km2 for spatial precipitation estimation was found, beyond which improvements are negligible. A larger relative value of discharge data with respect to precipitation data for calibrating hydrological models was also observed. Regarding non-traditional sources of information, social memory of past flooding events was found to be a relevant factor in the efficiency of flood early warning systems and therefore in their damage mitigation potential.
Finally, a new methodology for using social media data for probabilistic estimates of flood extent was put forward and shown to achieve results comparable to traditional approaches. This thesis contributes to integrated water management by improving the understanding of data needs and of the opportunities offered by new sources of information, thus making water management more efficient and useful for society.
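The rain-gauge density result above implies a simple screening rule. The helper below is an illustrative sketch built around the 24 gauges per 1000 km2 figure reported in the summary; the function names are invented and this is not code from the thesis:

```python
# Illustrative check against the ~24 gauges / 1000 km^2 density beyond
# which the summary reports negligible gains in spatial precipitation
# estimation. Threshold taken from the abstract; names are invented.
SATURATION_DENSITY = 24.0  # gauges per 1000 km^2

def gauge_density(n_gauges, area_km2):
    """Network density expressed in gauges per 1000 km^2."""
    return 1000.0 * n_gauges / area_km2

def more_gauges_likely_to_help(n_gauges, area_km2):
    # Below the saturation density, extra gauges should still improve
    # spatial precipitation estimates; above it, gains are negligible.
    return gauge_density(n_gauges, area_km2) < SATURATION_DENSITY

print(gauge_density(30, 2000.0))               # → 15.0
print(more_gauges_likely_to_help(30, 2000.0))  # → True
print(more_gauges_likely_to_help(60, 2000.0))  # → False
```

Such a rule of thumb is one way the thesis's guideline could inform network design in under-monitored regions.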
|