  • About
  • The Global ETD Search service is a free service for researchers to find electronic theses and dissertations. This service is provided by the Networked Digital Library of Theses and Dissertations.
    Our metadata is collected from universities around the world. If you manage a university/consortium/country archive and want to be added, details can be found on the NDLTD website.

Big data-driven optimization for performance management in mobile networks

Martinez-Mosquera, Diana 15 November 2021 (has links)
Humanity, since its inception, has been interested in the materialization of knowledge. Various ancient cultures generated large amounts of information through their writing systems. The modern growth of information can arguably be dated to 1880, when a census performed in the United States took eight years to tabulate. In the 1930s, demographic growth accelerated this accumulation of data. By 1940, libraries had collected a large body of writing, and it was in this decade that scientists began to use the term "information explosion"; the term first appeared in the Lawton (Oklahoma) Constitution newspaper in 1941. Today it can be said that we live in the age of big data. Exabytes of data are generated every day; consequently, big data has become one of the most important concepts in information systems. Big data refers to data at a scale that exceeds the capacity of conventional software to capture, process, and store it in a reasonable time. As a general criterion, most experts characterize big data by the large volume of data, the variety of formats and sources from which it comes, the immense velocity at which it is generated, the veracity of its content, and the value of the information extracted. Faced with this reality, several questions arise: How can this large amount of data be manipulated? How can important results be obtained to gain knowledge from this data? The need for a bridge between big data and wisdom is therefore evident. People, machines, applications, and other elements that make up a complex and constantly evolving ecosystem are involved in this process. Each project presents different peculiarities in the development of a framework based on big data. This, in turn, makes the landscape more complex for the designer, since multiple options can be selected for the same purpose. In this work, we focus on a framework for processing mobile network performance management data. 
In mobile networks, one of the fundamental areas is planning and optimization. This area analyzes key performance indicators to evaluate the behavior of the network. These indicators are calculated from the raw data sent by the different network elements. The network administration teams that receive and process these raw data use systems that are no longer adequate, owing to the great growth of networks and the emergence of new technologies such as 5G and 6G, which also include Internet of Things equipment. For these reasons, we propose in this work a big data framework for processing mobile network performance management data. We have tested our proposal using performance files from real networks. All the processing carried out on the XML-formatted raw data is detailed, and the solution is evaluated in the ingestion and reporting components. This study can give telecommunications vendors a reference big data framework for facing current and future challenges in performance management in mobile networks, for instance by reducing the data processing time for decisions in many of the activities involved in daily operation and future network planning.
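As a minimal illustration of the kind of processing such a framework performs, the sketch below parses a simplified, entirely hypothetical performance file and computes one KPI. Real vendor files follow standardized schemas (e.g., 3GPP measurement collection formats) and are far richer; the element names, counter names, and KPI here are invented for illustration.

```python
import xml.etree.ElementTree as ET

# Hypothetical, simplified performance-management file. Real files follow
# schemas such as 3GPP TS 32.435; these element/counter names are made up.
raw = """
<measData cell="LTE-0001">
  <counter name="rrc_attempts">1200</counter>
  <counter name="rrc_successes">1176</counter>
</measData>
"""

root = ET.fromstring(raw)
counters = {c.get("name"): int(c.text) for c in root.iter("counter")}

# A typical KPI is a ratio of raw counters, e.g. an RRC setup success rate.
kpi = 100.0 * counters["rrc_successes"] / counters["rrc_attempts"]
print(f'{root.get("cell")}: RRC setup success = {kpi:.1f}%')
```

In a real ingestion component this parsing step would run over thousands of such files per reporting period, which is what motivates the big data framing.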

Förutsättningar att bli godkänd vid auktorisationsprovet : Ur ett byrå- och genusperspektiv

Gustafsson, Yasmine, Gustafsson, Agnes January 2022 (has links)
Background: Each year only about 50 percent of candidates pass the authorization exam, which is considered a troubling figure given the high competence requirements placed on auditors. The authorization exam is preceded by at least six years of theoretical and practical training. Given this long period of study and preparation, the question arises why so many fail. This question is central to this study and is analyzed from an audit firm and gender perspective.  
Purpose: The purpose of this study is to explain whether the factors of audit firm size and gender affect an assistant auditor's performance on the authorization exam.  Method: The study is based on a deductive research approach grounded in previous research and theory within the subject. The first sub-study is quantitative, carried out as hypothesis testing through statistical tests. The second sub-study is qualitative, consisting of semi-structured interviews with authorized public accountants.  Conclusions: The results from the statistical tests indicate that audit firms affect an assistant auditor's prerequisites for the authorization exam. This is also illustrated and confirmed by the results from the qualitative sub-study. In connection with this, individual factors also have an impact on an assistant auditor's performance. Furthermore, the results indicate that gender does not have a major impact on an assistant auditor's performance or on when the exam is taken. However, differences in conditions between the genders cannot be excluded.

Big Maritime Data: The promises and perils of the Automatic Identification System : Shipowners and operators’ perceptions

Kouvaras, Andreas January 2022 (has links)
The term big data has been gaining importance at both the academic and the business level. Information technology plays a critical role in shipping, since there is high demand for fast transfer and communication between the parties to a shipping contract. The Automatic Identification System (AIS) was developed to improve maritime safety by tracking vessels and exchanging inter-ship information.  The purpose of this master's thesis was to a) investigate which business decisions the Automatic Identification System helps shipowners and operators (i.e., users) make, b) identify the benefits and perils arising from its use, and c) investigate possible improvements based on the users' perceptions. This master's thesis is a qualitative study using the interpretivist paradigm. Data were collected through semi-structured interviews. Six people participated, selected on the following criteria: a) a position in a technical department, as a DPA, or as a shipowner; b) participation in business decisions; c) employment at a shipping company that owns a fleet; and d) work with AIS data. The thematic analysis led to twenty-six codes, twelve categories, and five concepts. Empirical findings showed that AIS data mostly contributes to strategic business decisions. Participants are interested in using AIS data to measure the efficiency of their fleet and ports, estimate fuel consumption, reduce costs, protect the environment and people's health, analyze the trade market, predict the time of arrival and the optimal route and speed, maintain the highest security levels, and reduce the inaccuracies caused by manual input of some AIS attributes. Participants also mentioned some AIS challenges, including technological improvements (e.g., transponders, antennas) and the operation of autonomous vessels.  
Finally, this master's thesis contributes to prescriptive and descriptive theory, helping stakeholders reach new decisions and researchers and developers advance their products.
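Several of the decision areas the participants value (estimated time of arrival, speed, routing) reduce, in their simplest form, to geometry on AIS position reports. The sketch below is illustrative only and not part of the thesis: it computes a naive great-circle ETA from a single AIS fix, assuming constant speed over ground (SOG); the coordinates in the example are invented.

```python
import math

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance in nautical miles between two AIS fixes."""
    R_NM = 3440.065  # mean Earth radius expressed in nautical miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R_NM * math.asin(math.sqrt(a))

def eta_hours(lat, lon, dest_lat, dest_lon, sog_knots):
    """Naive ETA from one AIS position report, assuming constant SOG."""
    if sog_knots <= 0:
        return float("inf")  # vessel stopped or SOG missing
    return haversine_nm(lat, lon, dest_lat, dest_lon) / sog_knots

# Hypothetical fix: a vessel near Rotterdam bound for the English coast at 12 kn.
print(round(eta_hours(51.95, 4.0, 51.96, 1.35, 12.0), 1))
```

Real AIS-based ETA prediction accounts for routes, traffic, and weather; this shows only the baseline computation that such systems refine.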

Place des mégadonnées et des technologies de l'Intelligence Artificielle dans les activités de communication des petites et moyennes entreprises au Canada

El Didi, Dina 23 November 2022 (has links)
The development of big data and Artificial Intelligence technologies has given rise to a digital economy controlled by the web giants (GAFAM). This economy exhibits a certain inequality in access to and management of big data and AI technologies. The present study explores the inequality between large organizations and small and medium-sized enterprises (SMEs) regarding access to and use of big data and AI technologies. To that end, it addresses the following question: "How do communication teams in Canadian SMEs view the use and importance of big data and AI technologies for their work?" The theoretical framework mobilized in this research is, on the one hand, the sociology of uses, which helps to understand and analyze how SME communication teams use big data and AI technologies, and, on the other hand, the narrative approach, which describes the contexts of practice of those uses. We used a mixed-methods design. The quantitative method, an online questionnaire, identified the place these technologies currently occupy in the regular work of SME communication professionals, as well as the challenges they face in deploying and using them. The qualitative method, semi-structured interviews, provided a better understanding of the contexts of practice in which these technologies are or could be used. The results suggest a gap between SMEs and large organizations in exploiting and using these technologies, due above all to challenges such as a lack of knowledge and expertise and a lack of interest in these technologies. 
This inequality could be mitigated by putting in place a training plan for managers in order to bring about changes in organizational culture. The results also brought out the importance of human intervention, without which the insights generated by big data and AI technologies risk being biased. Thus, within the limits of this exploratory study, it advances knowledge by opening several avenues for future research on big data and AI technologies and their importance for communication activities in SMEs.

A Smart and Interactive Edge-Cloud Big Data System

Stauffer, Jake 08 1900 (has links)
Indiana University-Purdue University Indianapolis (IUPUI) / Data and information have increased exponentially in recent years. The promising era of big data is advancing many new practices, and one of the emerging big data applications is healthcare. Large quantities of data with varying complexities have led to a great need for smart and secure big data systems. The mobile edge, more specifically the smartphone, is a natural source of big data and is ubiquitous in our daily lives. Smartphones offer a variety of sensors, which makes them a very valuable source of data for analysis. Because this data comes directly from personal phones, it is sensitive and must be handled in a smart and secure way. In addition to generating data, it is also important to interact with the big data; it is therefore critical to create edge systems that enable users to access their data and to ensure that these applications are smart and secure. As the first major contribution of this thesis, we have implemented a mobile edge system called s2Edge. This edge system leverages Amazon Web Services (AWS) security features and is backed by an AWS cloud system. The implemented mobile application securely signs users up, logs them in and out, and connects them to the vast amounts of data they generate. With high interactive capability, the system allows users (such as patients) to retrieve and view their data and records, and to communicate with cloud users (such as physicians). The resulting mobile edge system is promising and demonstrates the potential of smart and secure big data interaction. The smart and secure transmission and management of big data on the cloud is essential for healthcare big data, including both patient information and patient measurements. 
The second major contribution of this thesis is a novel big data cloud system, s2Cloud, which can help healthcare systems better monitor patients and give doctors critical insights into their patients' health. s2Cloud achieves big data security through secure sign-up and log-in for doctors, as well as data transmission protection. The system allows doctors to manage both patients and their records effectively: doctors can add and edit patient and record information through the interactive website. Furthermore, the system supports both real-time and historical modes for big data management, so patient measurements can not only be visualized in real time but also retrieved for further analysis. The smart website also allows doctors and patients to interact with each other effectively through instantaneous chat. Overall, the proposed s2Cloud system, empowered by smart secure design innovations, demonstrates the feasibility and potential of healthcare big data applications. This study can further benefit and advance other smart-home and broader big data applications. / 2023-06-01
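The thesis builds its security on AWS features; as a library-independent sketch of the underlying idea (protecting edge-generated measurements in transit and at rest), the example below wraps a hypothetical patient measurement in a timestamped HMAC envelope that the cloud side can verify. The shared secret, field names, and freshness window are all invented for illustration and are not the thesis's actual protocol.

```python
import hashlib
import hmac
import json
import time

SECRET = b"shared-demo-secret"  # stand-in for a key provisioned via the cloud

def sign_measurement(payload: dict, secret: bytes = SECRET) -> dict:
    """Wrap an edge measurement with a timestamp and an HMAC-SHA256 tag."""
    body = dict(payload, ts=int(time.time()))
    msg = json.dumps(body, sort_keys=True).encode()  # canonical serialization
    return {"body": body, "tag": hmac.new(secret, msg, hashlib.sha256).hexdigest()}

def verify_measurement(env: dict, secret: bytes = SECRET, max_age_s: int = 300) -> bool:
    """Cloud-side check: the tag must match and the message must be fresh."""
    msg = json.dumps(env["body"], sort_keys=True).encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    ok = hmac.compare_digest(env["tag"], expected)
    return ok and (time.time() - env["body"]["ts"]) < max_age_s

env = sign_measurement({"patient": "p-001", "heart_rate": 72})
print(verify_measurement(env))   # an untampered envelope verifies
env["body"]["heart_rate"] = 190
print(verify_measurement(env))   # tampering breaks the tag
```

In production such integrity checks ride on top of TLS and managed identity (as with the AWS services the thesis uses); the envelope only illustrates why signing and freshness matter for sensitive health data.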

NON-NUTRITIVE SERIAL VARNISH

Willhoit, Thomas O'Brien 01 May 2023 (has links) (PDF)
Non-Nutritive Serial Varnish is a song written and arranged for big band which explores several stylistic and compositional techniques developed during the 20th century. Serial techniques are used as the structural underpinning during melodic presentation. Eventually these serial structures work their way to the surface to reveal themselves as source material. The salient aesthetic, overarching form and harmony are derived from conventional jazz. The use of semi-improvised melodic modules is also employed.

A comparison of work-specific and general personality measures as predictors of OCBs and CWBs in China and the United States

Wang, Qiang 23 September 2011 (has links)
No description available.

A cell transmission based assignment-simulation model for integrated freeway/surface street systems

Lee, Sungjoon January 1996 (has links)
No description available.

Use of the Traffic Speed Deflectometer for Concrete and Composite Pavement Structural Health Assessment: A Big-Data-Based Approach Towards Concrete and Composite Pavement Management and Rehabilitation

Scavone Lasalle, Martin 23 August 2022 (has links)
The latest trends in highway pavement management aim at implementing a rational, data-driven procedure to allocate resources for pavement maintenance and rehabilitation. To this end, decision-making is based on network-wide surface condition and structural capacity data, preferably collected in a non-destructive manner, such as with a deflection testing device. This more holistic approach has been shown to be more cost-effective than current practice, in which the pavement manager grounds maintenance and rehabilitation decisions on surface distress measurements alone. However, pavement practitioners still rely mostly on surface distress because traditional deflection measuring devices are not practical for network-level data collection. Traffic-speed deflection devices, among them the Traffic Speed Deflectometer [TSD], can measure pavement surface deflections at travel speeds as high as 95 km/h [60 mph] and report those measurements with a spatial resolution as dense as 5 cm [2 inches] between consecutive measurements. Since their inception in the early 2000s, and mostly over the past 15 years, numerous research efforts and trial tests have focused on the interpretation of the deflection data collected by the TSD, its validity as a field testing device, and its comparability against the staple pavement deflection testing device, the Falling Weight Deflectometer [FWD]. These efforts have concluded that, although different in nature from the FWD, the TSD does furnish valid deflection measurements from which pavement structural health can be assessed. Most published TSD-related literature has focused on TSD surveys of flexible pavement networks and the estimation of structural health indicators for hot-mix asphalt pavement structures, a sensible approach given that the majority of the US paved road network is asphalt. 
Meanwhile, concrete and composite pavements (a minority of the US pavement network that nonetheless accounts for nearly half of the US Interstate System) have been mostly neglected in TSD-related research, even though the TSD has been deemed a suitable device for sourcing deflection data from which to infer the structural health of the pavement slabs and the load-carrying joints. This Dissertation's main objective is therefore to fill this gap in knowledge, providing the pavement manager/practitioner with a streamlined, comprehensive interpretation procedure that turns dense TSD deflection measurements collected on a jointed pavement network into characterization parameters and structural health metrics for the concrete slab system, the sub-grade material, and the load-carrying joints. The proposed TSD data analysis procedure spans two stages: data extraction and interpretation. The data extraction stage applies a Lasso-based regularization scheme [Basis Pursuit coupled with reweighted L1 minimization] to simultaneously remove white noise from the TSD deflection measurements and extract the deflection response generated as the TSD travels over the pavement's transverse joints. The examples presented demonstrate that this technique can pinpoint, from the network-wide TSD measurements, the location of structurally weak spots within the pavement network, such as deteriorated transverse joints or segments in the early stages of fatigue damage, worthy of further investigation and/or structural overhaul. The interpretation stage implements a linear-elastic jointed-slab-on-ground mathematical model to back-calculate the concrete pavement's and subgrade's stiffness and the transverse joints' load transfer efficiency index [LTE] from the denoised TSD measurements. 
In this Dissertation, the performance of this back-calculation technique is analyzed with actual TSD data collected at a 5-cm resolution at the MnROAD test track, for which material property results and FWD-based deflection test results at select transverse joints are available. An early exploratory analysis of the available 5-cm data, however, revealed a discrepancy between the reported deflection slope and velocity data and simulated measurements: the simulated deflection slopes mismatch the observations for measurements collected near the transverse joints, whereas the measured and simulated deflection velocities agree. This finding prompted a revision of the well-known direct relationship between TSD-based deflection velocity and slope data, concluding that it holds only in very specific cases and that a jointed pavement is a case in which deflection velocity and slope do not correlate directly. As a consequence, the back-calculation of the pavement properties and the joints' LTE index was implemented with the TSD's deflection velocity data as input. Validation of the back-calculation tool using TSD data from the MnROAD low-volume road showed reasonable agreement with the available comparison data while providing an LTE estimate for all transverse joints (including those for which FWD-based deflection data is unavailable), suggesting that the proposed data analysis technique is practical for corridor-wide screening. 
In summary, this Dissertation presents a streamlined TSD data extraction and interpretation technique that can (1) highlight the location of structurally deficient joints within a jointed pavement corridor worthy of further investigation with an FWD and/or localized repair, thus optimizing the time the FWD spends on the road; and (2) reasonably estimate the structural parameters of a concrete pavement structure, its sub-grade, and the transverse joints, thus providing valuable data for both inventory-keeping and rehabilitation management. / Doctor of Philosophy / When allocating funds for network-wide pavement maintenance, such as at the State or Country level, the engineer relies on as much pavement condition data as possible to assign the most suitable maintenance or rehabilitation treatment to each pavement segment. Currently, practitioners rely mostly on surface condition data to decide how to maintain their roads, as this data can be collected quickly and easily with automated vehicle-mounted equipment and analyzed by computer software. However, managerial decisions based solely on surface condition data do not make optimal use of Agency resources, for they do not account for the pavements' structural capacity when assigning maintenance solutions. The manager may thus apply a surface treatment to a structurally weak segment with a poor surface, which will be prone to early failure (wasting the investment), or, conversely, reconstruct a deteriorated yet strong segment that could have been fixed with a surface treatment. 
The reason for such sub-optimal managerial practice has been the lack of a commercially available pavement testing device capable of producing structural health data at a rate similar to that of existing surface scanning equipment; pavement engineers could only turn to crawling-speed or stop-and-go deflection devices to gather such data, which are fit for project-level applications but unsuitable for routine network-wide surveying. This trend reversed in the early 2000s with the launch of the Traffic Speed Deflectometer [TSD], a device capable of taking dense pavement deflection measurements (spaced as closely as 5 cm [2 inches] apart) while traveling at speeds above 50 mph. Following the device's release, numerous research activities studied its feasibility as a routine network-wide data collection device and developed analysis schemes to interpret the collected measurements into pavement structural condition information. This research effort is still ongoing; the Transportation Pooled Fund [TPF] Project 5(385) is aimed in that direction and has set the goal of furnishing standards for the acquisition, storage, and interpretation of TSD data for pavement management. That said, data collection and analysis protocols should be drafted to interpret the data gathered by the TSD on both flexible and rigid pavements. Concerning TSD-based evaluation of flexible asphalt pavements, abundant published literature exists, whereas TSD surveying of concrete and composite (concrete + asphalt) pavements has been out of the spotlight, partly because these pavements constitute only a minority of the US paved highway network, even though they account for roughly half of the Interstate system. Yet the TSD has been found suitable to provide valuable structural health information concerning both the pavement slabs and the load-bearing joints, the weakest elements of such structures. 
With this in mind, this Dissertation research aims at bridging this gap in knowledge: a streamlined analysis methodology is proposed to process the TSD deflection data collected while surveying a jointed rigid pavement and derive important structural health metrics from which the manager can drive decision-making. Broadly speaking, this analysis methodology consists of two main elements: • The data extraction stage, in which the TSD deflection data is mined both to clear it of measurement noise and to extract meaningful features, such as the pulse responses generated as the TSD travels over the pavement joints. • The interpretation stage, which is more pavement-engineering-related. Herein, the filtered TSD measurements are used to fit a pavement response model so that the pavement structural parameters (its stiffness, the strength of the sub-grade soil, and the joints' structural health) can be inferred. This Dissertation covers the mathematical grounds for these analysis techniques, validation tests on computer-generated data, and experiments with actual TSD data to test their applicability. The ultimate intention is for these techniques to eventually be adopted in practice for routine analysis of TSD data, enabling more rational and resource-wise pavement management.
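The reweighted-L1 extraction idea can be illustrated on synthetic data. The sketch below is a toy stand-in for the Dissertation's pipeline, not a reproduction of it: it uses an identity (spike) dictionary, invented joint locations and amplitudes, and a few reweighting passes of a weighted Lasso to denoise a trace and flag the sparse "joint responses".

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic stand-in for a TSD trace: a sparse train of "joint responses"
# (spikes) buried in white noise. Locations and amplitudes are made up.
rng = np.random.default_rng(0)
n = 200
clean = np.zeros(n)
clean[[50, 120, 170]] = [3.0, -4.0, 2.0]
noisy = clean + 0.1 * rng.standard_normal(n)

# Basis Pursuit denoising with a spike (identity) dictionary reduces to the
# Lasso; reweighted L1 repeats the fit with weights 1/(|x| + eps), which
# suppresses residual noise while easing the penalty on the true spikes.
X = np.eye(n)
weights = np.ones(n)
for _ in range(3):
    model = Lasso(alpha=0.001, fit_intercept=False, max_iter=10000)
    model.fit(X / weights, noisy)   # dividing columns applies the weights
    x = model.coef_ / weights       # recover the unweighted coefficients
    weights = 1.0 / (np.abs(x) + 1e-3)

joints = np.flatnonzero(np.abs(x) > 1.0)
print(joints.tolist())              # indices flagged as joint responses
```

The real pipeline replaces the identity dictionary with physically motivated response shapes and runs at network scale, but the noise-suppression mechanism is the same.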

Screening and Engineering Phenotypes using Big Data Systems Biology

Huttanus, Herbert M. 20 September 2019 (has links)
Biological systems display remarkable complexity that is not properly accounted for in small, reductionistic models. Increasingly, big data approaches using genomics, proteomics, metabolomics etc. are being applied to predicting and modifying the emergent phenotypes produced by complex biological systems. In this research, several novel tools were developed to assist in the acquisition and analysis of biological big data for a variety of applications. In total, two entirely new tools were created and a third, relatively new method, was evaluated by applying it to questions of clinical importance. 1) To assist in the quantification of metabolites at the subcellular level, a strategy for localized in-vivo enzymatic assays was proposed. A proof of concept for this strategy was conducted in which the local availability of acetyl-CoA in the peroxisomes of yeast was quantified by the production of polyhydroxybutyrate (PHB) using three heterologous enzymes. The resulting assay demonstrated the differences in acetyl-CoA availability in the peroxisomes under various culture conditions and genetic alterations. 2) To assist in the design of genetically modified microbe strains that are stable over many generations, software was developed to automate the selection of gene knockouts that would result in coupling cellular growth with production of a desired chemical. This software, called OptQuick, provides advantages over contemporary software for the same purpose. OptQuick can run considerably faster and uses a free optimization solver, GLPK. Knockout strategies generated by OptQuick were compared to case studies of similar strategies produced by contemporary programs. In these comparisons, OptQuick found many of the same gene targets for knockout. 3) To provide an inexpensive and non-invasive alternative for bladder cancer screening, Raman-based urinalysis was performed on clinical urine samples using RametrixTM software. 
RametrixTM has previously been developed and applied to other urinalysis applications, but this study was the first to apply this new technology to bladder cancer screening. Using a pool of 17 bladder-cancer-positive urine samples and 39 clinical samples exhibiting a range of healthy or other genitourinary disease phenotypes, RametrixTM was able to detect bladder cancer with a sensitivity of 94% and a specificity of 54%. 4) Methods for urine sample preservation were tested with regard to their effect on subsequent analysis with RametrixTM. Specifically, sterile filtration was tested as a potential method for extending the duration for which samples may be kept at room temperature prior to Raman analysis. Sterile filtration was shown to alter the chemical profile initially, but did not prevent further shifts in chemical profile over time. In spite of this, both unfiltered and filtered urine samples could be used for screening for chronic kidney disease or bladder cancer even after being stored for 2 weeks at room temperature, making sterile filtration largely unnecessary. / Doctor of Philosophy / Biological systems display remarkable complexity that is not properly accounted for in conventional, reductionistic models. Thus, there is a growing trend in biological studies to use computational analysis on large databases of information, such as genomes containing thousands of genes or chemical profiles containing thousands of metabolites in a single cell. In this research, several new tools were developed to assist with gathering and processing large biological datasets. In total, two entirely new tools were created and a third, relatively new method, was evaluated by applying it to questions of medical importance. The first two tools are for bioengineering applications. 
Bioengineers often want to understand the complex chemical network of a cell's metabolism and, ultimately, alter that network so as to force the cell to make more of a desired chemical, like a biofuel or medicine. The first tool discussed in this dissertation offers a way to measure the concentration of key chemicals within a cell. Unlike previous methods for measuring these concentrations, however, this method limits its search to a specific compartment within the cell, which is important to many bioengineering strategies. The second technology discussed in this paper uses computer simulations of the cell's entire metabolism to determine which genetic alterations might lead the cell to produce more of a chemical of interest. The third tool involves analyzing the chemical makeup of urine samples to screen for diseases such as bladder cancer. Two studies were conducted with this third tool. The first study shows that Raman spectroscopy can distinguish between bladder cancer and related diseases. The second study addresses whether sterilizing the urine samples through filtration is necessary to preserve the samples for analysis. It was found that filtration was neither beneficial nor necessary.
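The growth-coupling idea behind the knockout-selection software can be sketched on a toy metabolic model. Everything below is invented for illustration (the network, bounds, and reaction names), and the OptKnock-style bilevel search is replaced by brute force over single knockouts; it shows only the core check of worst-case product flux at maximal growth, not OptQuick's actual algorithm.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric model (all names and numbers hypothetical).
# Internal metabolites: A, B.  Reactions (columns):
#   upt: -> A | r1: A -> biomass | r2: A -> B + product | r3: B -> biomass
rxns = ["upt", "r1", "r2", "r3"]
S = np.array([[1, -1, -1,  0],    # A mass balance
              [0,  0,  1, -1]])   # B mass balance
ub = np.array([10.0, 1000.0, 1000.0, 1000.0])
growth = np.array([0.0, 1.0, 0.0, 1.0])   # biomass is made by r1 and r3
product = np.array([0.0, 0.0, 1.0, 0.0])  # product is secreted by r2

def fba(cost, ub, pin=None):
    """Solve min cost.v  s.t.  S v = 0, 0 <= v <= ub (flux balance analysis)."""
    A_eq, b_eq = S, np.zeros(2)
    if pin is not None:            # optionally pin growth at its optimum
        A_eq = np.vstack([S, pin[0]])
        b_eq = np.append(b_eq, pin[1])
    return linprog(cost, A_eq=A_eq, b_eq=b_eq,
                   bounds=list(zip(np.zeros(len(ub)), ub)))

# Brute-force screen of single knockouts for growth coupling: a knockout is
# coupled if even the *minimum* product flux at maximal growth is positive.
for i, name in enumerate(rxns[1:], start=1):
    ko = ub.copy()
    ko[i] = 0.0                               # knock the reaction out
    g = fba(-growth, ko)                      # maximize growth
    if not g.success or -g.fun < 1e-6:
        continue                              # infeasible or lethal knockout
    p = fba(product, ko, pin=(growth, -g.fun))
    if p.fun > 1e-6:
        print(f"knockout {name}: growth {-g.fun:.1f}, guaranteed product {p.fun:.1f}")
```

Here knocking out the direct pathway r1 forces all growth through the product-secreting route, so the cell cannot grow without making product, which is exactly the stability property growth-coupled designs seek.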
