621 |
The impact of technology acceptance and openness to innovation on software implementation. Bertini, Michael Marin, 01 January 2016
Senior management decisions to foster innovation and adopt new technology solutions have serious implications for the success of organizational change initiatives. This project examined the reasons behind senior management decisions to adopt new Enterprise Resource Planning (ERP) systems as a solution to business problems. Specifically, it investigated the degree to which perceived ease of use and perceived usefulness of an ERP system influenced senior managers' decisions to innovate. Rogers's diffusion of innovations theory and Davis's technology acceptance model were used to predict when senior managers were open to innovation and whether they made decisions to adopt new technological innovations. Of the 3,000 randomly selected senior managers of small to medium sized organizations in the United States who were invited via email to participate, 154 completed the online survey. Binary logistic regression analysis of the collected data failed to produce statistically significant support for the claim that perceived ease of use, perceived usefulness, and openness to innovation affect a senior manager's decision to innovate. The study suggests that further research could use a qualitative design to gain a deeper understanding of the underlying reasons, opinions, and motivations in the emotive aspects of the decision-making process in adopting ERP software innovations. It also offers a positive social change to stakeholders potentially affected by technology innovation and adoption by providing empirically validated evidence for the causes of senior management technology decisions.
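A minimal sketch of the kind of binary logistic regression the abstract describes; the variable names and simulated data are illustrative assumptions, not the study's actual instrument or results:

```python
# Hypothetical sketch of a binary logistic regression of an adoption
# decision on survey-scale predictors. All data here are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 154  # number of completed surveys reported in the abstract
ease = rng.uniform(1, 7, n)        # perceived ease of use (Likert-type scale)
useful = rng.uniform(1, 7, n)      # perceived usefulness
openness = rng.uniform(1, 7, n)    # openness to innovation
adopt = rng.integers(0, 2, n)      # 1 = decided to adopt ERP, 0 = did not

X = sm.add_constant(np.column_stack([ease, useful, openness]))
model = sm.Logit(adopt, X).fit(disp=False)
print(model.summary(xname=["const", "ease", "useful", "openness"]))
# With outcome independent of predictors (as simulated), coefficients are
# non-significant, mirroring the abstract's reported finding.
```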
|
622 |
3D Image Segmentation Implementation on FPGA Using EM/MPM Algorithm. Sun, Yan. 12 1900
Indiana University-Purdue University Indianapolis (IUPUI) / In this thesis, 3D image segmentation is targeted to a Xilinx Field Programmable Gate Array (FPGA) and verified with extensive simulation. Segmentation is performed using the Expectation-Maximization with Maximization of the Posterior Marginals (EM/MPM) Bayesian algorithm, which segments the 3D image using neighboring pixels based on a Markov Random Field (MRF) model. This iterative algorithm was designed, synthesized, and simulated for the Xilinx FPGA, achieving a speed improvement of more than 100 times over standard desktop computer hardware. Three new techniques were key to achieving this speed: pipelined computational cores, sixteen parallel data paths, and a novel memory interface that maximizes external memory bandwidth. Seven MPM segmentation iterations are matched to the external memory bandwidth required for a single source-file read and a single segmented-file write, plus a small amount of latency.
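For readers unfamiliar with EM/MPM, a plain-software sketch of one MPM labeling sweep under Gaussian class likelihoods and a Potts/MRF neighbor prior is shown below; the FPGA design pipelines exactly this kind of inner loop across sixteen parallel data paths. The class means, smoothing weight, and toroidal boundary handling are illustrative simplifications, not the thesis's implementation:

```python
# One MPM Gibbs-sampling sweep over a 3D label volume (NumPy sketch).
# img: 3D intensity array; labels: current 3D label array (ints 0..K-1);
# means: length-K array of class means; beta: MRF smoothing weight.
import numpy as np

def mpm_sweep(img, labels, means, sigma, beta, rng):
    K = len(means)
    counts = np.zeros(labels.shape + (K,))
    for axis in (0, 1, 2):                 # 6-connected 3D neighborhood
        for shift in (1, -1):
            shifted = np.roll(labels, shift, axis=axis)  # toroidal for brevity
            for k in range(K):
                counts[..., k] += (shifted == k)
    # log p(label = k | data, neighbors), up to an additive constant
    loglik = -((img[..., None] - means) ** 2) / (2 * sigma ** 2)
    logpost = loglik + beta * counts
    p = np.exp(logpost - logpost.max(axis=-1, keepdims=True))
    p /= p.sum(axis=-1, keepdims=True)
    # inverse-CDF categorical sampling at every voxel simultaneously
    cum = p.cumsum(axis=-1)
    u = rng.random(labels.shape + (1,))
    return (u < cum).argmax(axis=-1)

# Repeating this sweep and accumulating per-voxel label frequencies yields
# the MPM estimate; EM updates of means/sigma alternate with these sweeps.
```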
|
623 |
The investment decision process of real estate owners: How to determine property uses in development projects / Fastighetsägares beslutsprocess: Hur bestäms verksamhetstyper i utvecklingsprojekt? Hjelte Jonasson, Amanda; Prick, Cecilia. January 2018
Mixed-use developments have been shown to have positive effects on an area's attractiveness and have thus become a planning principle in Swedish urban areas. To ensure that a mix of property uses is obtained and that a sufficient amount of housing is built, many municipalities impose constraints in their detailed development plans. Despite the many positive aspects of mixed-use developments, there are also challenges. Real estate owners are the long-term investors, and which projects and property uses they choose to develop is a matter of risk. Real estate development is characterized by complexity and uncertainty, and the end product should yield leasable space over time. To succeed, a real estate owner needs to make correct forecasts of future demand and supply for the different property uses. The owner also needs to manage risk related to detailed development plans, permits, flexibility in design, construction, and leasing. The aim of this study is to explore how property owners decide which property uses to include in development projects. The objective is to identify the most important factors behind the decision and to contribute to the body of knowledge concerning the investment decision process for property uses in development projects. The study uses a qualitative method with an abductive approach in which semi-structured interviews were conducted with 11 of the largest real estate owners in Stockholm. The information from the interviews resulted in a general description of the process real estate owners undertake to decide which property uses to include in development projects. The most important factors behind the decision were shown to be demand in the area, the will of the municipality, the preconditions of the site, and the profitability analysis of the project. The detailed development plan, controlled by the municipality, is what ultimately regulates which property uses can be developed. Real estate owners can negotiate with the municipality over which property uses to include when new plans are developed, but in the end the municipality has the final say in the matter.
|
624 |
Macroscopic Crash Analysis and Its Implications for Transportation Safety Planning. Siddiqui, Chowdhury Kawsar, 01 January 2012
Incorporating safety into the transportation planning stage, often termed transportation safety planning (TSP), relies on the vital interplay between zone characteristics and zonal traffic crashes. Although a few safety studies have made some effort toward integrating safety and planning, several problems remain unresolved and a complete TSP framework is still absent from the literature. This research examines the suitability of the current traffic-related zoning process within a newly suggested planning method that incorporates safety measures. To accomplish this broader goal, the study defined its research objectives in the following directions toward establishing a TSP framework: i) exploring the existing key determinants in traditional transportation planning (e.g., trip generation/distribution data, land use types, and demographics) in order to develop an effective and efficient TSP framework; ii) investigating the Modifiable Areal Unit Problem (MAUP) in the context of macro-level crash modeling to assess the effect of zone size and boundary; iii) understanding the neighborhood influence of crashes at or near zonal boundaries; and iv) developing crash-specific safety measures within the four-step transportation planning process. The research was conducted using spatial data from the counties of West Central Florida. Crash data per spatial unit were analyzed using nonparametric approaches (e.g., data mining and random forests), classical statistical methods (e.g., negative binomial models), and Bayesian statistical techniques. In addition, comprehensive Geographic Information System (GIS) based tools were utilized for spatial data analysis and representation. Exploring the significant variables related to specific types of crashes is vital in the planning stages of a transportation network. This study identified and examined important variables associated with total and severe crashes per traffic analysis zone (TAZ) by applying nonparametric statistical techniques to various trip-related variables and road-traffic factors. Since a macro-level analysis by definition involves aggregating crashes per spatial unit, spatial dependence or autocorrelation may arise when a variable in one geographic region is affected by the same variable in neighboring regions. Few safety studies have examined crashes at the TAZ level, and none explicitly considered the spatial effects of the crashes occurring in them. To obtain a clear picture of the spatial autocorrelation of crashes, this study investigated its effect in modeling pedestrian and bicycle crashes in TAZs. Additionally, the study examined pedestrian crashes in Environmental Justice (EJ) TAZs, identified in compliance with ongoing practices of Metropolitan Planning Organizations (MPOs) and previous research. Minority population and low income are two important criteria by which EJ areas are identified, and these areal characteristics are of particular interest to traffic safety analysts investigating the contributing factors of pedestrian crashes in these deprived areas. Pedestrian and bicycle crashes were estimated as functions of roadway characteristics and various demographic and socio-economic factors.
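As a rough illustration of the zone-level negative binomial crash models mentioned above, the following sketch fits such a model to simulated data; the covariate names and data-generating process are hypothetical, not the study's variables:

```python
# Illustrative TAZ-level negative binomial crash model (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200                                   # hypothetical number of TAZs
taz = pd.DataFrame({
    "vmt": rng.gamma(2.0, 1.0, n),        # exposure proxy (vehicle miles traveled)
    "pct_lowinc": rng.uniform(0, 0.6, n), # share of low-income population
    "intersections": rng.poisson(15, n),  # count of intersections in the zone
})
mu = np.exp(0.5 + 0.4 * taz.vmt + 0.8 * taz.pct_lowinc
            + 0.02 * taz.intersections)
taz["crashes"] = rng.poisson(mu * rng.gamma(2.0, 0.5, n))  # overdispersed counts

nb = smf.negativebinomial("crashes ~ vmt + pct_lowinc + intersections",
                          data=taz).fit(disp=False)
print(nb.summary())   # expected signs: more exposure -> more crashes
```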
Significant differences were found between the predictor sets for pedestrian and bicycle crashes. In all cases the models with spatial correlation performed better than the models that did not account for spatial correlation among TAZs, implying that spatial correlation should be considered when modeling pedestrian and bicycle crashes at the aggregate or macro level. The significance of spatial autocorrelation was later found in the total and severe crash analyses as well and was accounted for in their respective modeling techniques. Since the study found affirmative evidence for including spatial autocorrelation in the safety performance functions, this research then sought to identify an appropriate spatial entity on which to build the TSP framework. A wide array of spatial units has been explored in macro-level crash modeling in previous safety research, and with the advancement of GIS, safety analysts are able to analyze crashes for various geographical units. However, no clear guideline yet exists on which geographic entity a modeler should choose. The preferred spatial unit can vary with the dependent variable of the model; alternatively, for a specific dependent variable, models may be invariant across multiple spatial units, producing similar goodness-of-fit. This problem is closely related to the Modifiable Areal Unit Problem, a common issue in spatial data analysis. The study investigated three crash models (total, severe, and pedestrian) developed for TAZs, block groups (BGs), and census tracts (CTs) using various roadway characteristics and census variables (e.g., land use and socio-economic factors), and compared them on multiple goodness-of-fit measures. Based on MAD (mean absolute deviance) and MSPE (mean squared prediction error), the total, severe, and pedestrian crash models for TAZs and BGs had similar fits, both better than those developed for CTs. This indicated that these crash models are affected by the size of the spatial units rather than by their zoning configurations. TAZs have long been the base spatial units for developing travel demand models, and metropolitan planning organizations widely use them in developing their long range transportation plans (LRTPs). Considering this practical application, it was concluded that, as a geographical unit, TAZs had a relative advantage over block groups and census tracts. Once TAZs were selected as the base spatial unit of the TSP framework, the TAZ delineations were carefully inspected. Traffic analysis zones are often delineated by the existing street network, which can place a considerable number of crashes on or near zonal boundaries. While the traditional macro-level crash modeling approach assigns zonal attributes to all crashes that occur within the zonal boundary, this research acknowledged the inaccuracy of relating crashes on or near a zone's boundary merely to the attributes of that zone. A novel approach was proposed to account for the spatial influence of neighboring zones on crashes that occur on or near zonal boundaries. Predictive models for pedestrian crashes per zone were developed using a hierarchical Bayesian framework with separate predictor sets for boundary and interior (non-boundary) crashes.
These models, which account for boundary and interior crashes separately, had better goodness-of-fit measures than models with no specific consideration for crashes located at or near zone boundaries. Additionally, they captured some unique predictors associated explicitly with interior and boundary crashes. For example, the variables 'total roadway length with 35 mph posted speed limit' and 'long term parking cost' were not statistically significantly different from zero in the interior crash model but were significantly different from zero at the 95% level in the boundary crash model. Although a single layer of adjacent traffic analysis zones was defined for pedestrian crashes and boundary pedestrian crashes were modeled on the characteristics of these adjacent zones, this was not considered reasonable for bicycle-related crashes, as the average roaming area of bicyclists is usually greater than that of pedestrians; for smaller TAZs, it is sometimes possible for a bicyclist to cross the entire zone. To account for this greater coverage area, boundary bicycle crashes were modeled based on two layers of adjacent zones. As observed from the goodness-of-fit measures, the models considering single-layer variables and those considering two-layer variables both outperformed the models that did not consider layering at all, while remaining comparable to each other. Motor vehicle crashes (total and severe) were classified as 'on-system' and 'off-system' crashes, and two sub-models were fitted to calibrate the safety performance functions for these crashes. On-system and off-system roads refer to two different roadway hierarchies: on-system, or state-maintained, roads typically have higher speed limits and carry traffic from distant TAZs, whereas off-system roads are mostly local roads with relatively low speed limits. Due to these distinct characteristics, on-system crashes were modeled with only the population and total employment variables of a zone in addition to the roadway and traffic variables, with all other zonal variables disregarded; for off-system crashes, by contrast, all zonal variables were considered. Comparing this on-/off-system sub-model framework to the other candidate models showed that it provided superior goodness-of-fit for both total and severe crashes. Based on the safety performance functions developed for pedestrian, bicycle, total, and severe crashes, the study proposes a novel and complete framework for assessing the safety of these crash types simultaneously, in parallel with the four-step transportation planning process and with no additional data requirements on the practitioners' side.
|
625 |
Selective Multivariate Applications in Forensic Science. Rinke, Caitlin, 01 January 2012
A 2009 report published by the National Research Council addressed the need for improvements in the field of forensic science, placing emphasis on more rigorous scientific analysis within many forensic disciplines and on establishing limitations and error rates through statistical analysis. This research focused on multivariate statistical techniques for the analysis of spectral data obtained in multiple forensic applications, including samples from automobile float glasses and paints, bones, metal transfers, ignitable liquids and fire debris, and organic compounds including explosives. The statistical techniques were used for two types of data analysis: classification and discrimination. Statistical methods including linear discriminant analysis and a novel soft classification method were used to classify forensic samples against a compiled library. The novel soft classification method combined three statistical steps: Principal Component Analysis (PCA), Target Factor Analysis (TFA), and Bayesian Decision Theory (BDT). Together these provide classification based on posterior probabilities of class membership, which give a statistical probability of classification that can aid a forensic analyst in reaching a conclusion. The second analytical approach applied nonparametric methods to provide a means of discriminating between samples. Nonparametric methods are performed as hypothesis tests and do not assume a normal distribution of the analytical figures of merit. The nonparametric permutation test was applied to forensic problems to determine the similarity between two samples and to provide discrimination rates. Both the classification and discrimination methods were applied to data acquired from multiple instrumental methods: Laser-Induced Breakdown Spectroscopy (LIBS), Fourier Transform Infrared Spectroscopy (FTIR), Raman spectroscopy, and Gas Chromatography-Mass Spectrometry (GC-MS). Some of these instrumental methods are already applied in forensic work, such as GC-MS for the analysis of ignitable liquid and fire debris samples, while others bring instrumental analysis to areas of forensic science that currently lack it, such as LIBS for the analysis of metal transfers. This research investigates the combination of these instrumental techniques with multivariate statistical techniques in new approaches to forensic applications, to help improve the field of forensic science.
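The following sketch illustrates the general soft-classification idea, posterior class probabilities computed from spectra after dimension reduction. It substitutes scikit-learn's PCA and linear discriminant analysis for the study's PCA/TFA/BDT chain (LDA also yields posteriors via Bayes' rule) and uses simulated spectra:

```python
# Soft classification of spectra via posterior probabilities:
# PCA scores feed a Bayes-rule classifier (LDA stands in for TFA/BDT here).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
n_per_class, n_channels = 30, 500          # e.g., library spectra per class
classes = ["glass_A", "glass_B", "paint"]
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(n_per_class, n_channels))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
clf.fit(X, y)
posterior = clf.predict_proba(X[:1])       # posterior class probabilities
print(dict(zip(clf.classes_, posterior.round(3)[0])))
# A near-1.0 posterior supports a class assignment; flat posteriors
# signal that the analyst should not force a hard conclusion.
```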
|
626 |
Impacts of Supply Chain Disruptions due to Covid-19 on Strategies of Textile Industries: Case Study on Bangladesh Textile Industry. Hussain, Muqadas; Bappy, MD Forhad Hossain. January 2022
Purpose of research: Covid-19 was one of the most catastrophic pandemics of modern times and heavily affected business supply chains. The textile industry was among those hit hardest. During the crisis, Bangladesh, one of the largest textile exporters, was able to withstand the major impacts. This study explores the major decisions and strategies organizations made to minimize those impacts.
Methodology: The study uses behavioral decision theory within a qualitative research methodology. The theory provides basic knowledge of how humans react under certain conditions and given the availability of certain data; it is particularly relevant here because decisions were strongly affected by fear of the pandemic. About fifteen interviews were conducted with management personnel from different companies, and their decision making was analyzed.
Findings: The major finding was that Covid-19 heavily influenced the decision making and strategy of the managerial departments of the textile industry. This took the shape not only of exploring new import destinations but also of salary deductions, layoffs, and in some places improvements in overall working conditions, in which the change in behavioral decision making can easily be seen.
Originality: The study applies behavioral decision theory to analyze how the Bangladesh textile sector withstood the impacts of Covid-19 and how management decisions were affected. It fills a gap in the literature by presenting actual on-the-ground data taken directly from the source via interviews. The study could also serve as a guideline for textile companies should another pandemic or recession occur.
The presentation was held on September 26, 2022, online over Zoom.
|
627 |
Causal Inference for Health Effects of Time-varying Correlated Environmental Mixtures. Chai, Zilan. January 2023
Exposure to environmental chemicals has been shown to affect health status throughout the life course, and quantifying the joint effect of environmental mixtures over time is crucial for determining optimal intervention timing. Establishing causal relationships from environmental mixture data can be challenging due to various factors, including multicollinearity, complex functional forms of exposure-response relationships, and residual unmeasured confounding. These issues can lead to biased estimates of treatment effects and pose significant obstacles to accurately identifying the true relationship between pollutants and outcome variables, making the causal interpretation of longitudinal environmental mixture studies difficult.
This dissertation explores the use of causal inference in environmental mixture studies, with a particular emphasis on addressing three key challenges. First, there is currently no statistical approach that allows simultaneous consideration of time-varying confounding, flexible modeling, and variable selection when examining the effect of multiple, correlated, and time-varying exposures. Second, the violation of a critical assumption that underpins all causal inference methods - namely, the absence of unmeasured confounding - poses a significant problem, as models that incorporate multiple environmental exposures may exacerbate the degree of bias depending on the nature of unmeasured confounding. Finally, there is a lack of computational resources that facilitate the application of newly developed causal inference methods for analyzing environmental mixtures.
In Chapter 2, we introduce a causal inference method, g-BKMR, which enables to estimate nonlinear, non-additive effects of time-varying exposures and time-varying confounders, while also allowing for variable selection. An extensive simulation study shows that g-BKMR outperforms approaches that rely on correct model specification or do not account for time-dependent confounding, especially when correlation across time-varying exposures is high or the exposure-outcome relationship is nonlinear. We apply g-BKMR to quantify the contribution of metal mixtures to blood glucose in the Strong Heart Study, a prospective cohort study of American Indians.
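A rough non-Bayesian flavor of the kernel machine component underlying BKMR can be sketched with kernel ridge regression on correlated simulated exposures; g-BKMR itself is Bayesian and additionally handles variable selection and time-varying confounding, so this conveys only the nonlinear, non-additive surface estimation:

```python
# Kernel regression on a correlated exposure mixture (illustrative only;
# not the g-BKMR algorithm, just its kernel-machine flavor).
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
n = 300
cov = 0.7 * np.ones((3, 3)) + 0.3 * np.eye(3)   # highly correlated metals
Z = rng.multivariate_normal([0, 0, 0], cov, n)
y = np.sin(Z[:, 0]) * Z[:, 1] + 0.5 * Z[:, 2] + rng.normal(0, 0.3, n)

h = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(Z, y)

# Exposure-response curve for metal 1 with the others held at their medians
grid = np.linspace(-2, 2, 50)
Znew = np.column_stack([grid,
                        np.full(50, np.median(Z[:, 1])),
                        np.full(50, np.median(Z[:, 2]))])
print(h.predict(Znew)[:5])
```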
In Chapter 3, we address the issue of time-varying unmeasured confounding when estimating time-varying effects of exposure to environmental chemicals. We review the Bayesian g-formula under the assumption of no unmeasured confounding and then introduce a Bayesian probabilistic sensitivity analysis approach that can account for multiple, potentially time-varying, unmeasured confounders and continuous exposures. Through a simulation study, we demonstrate that the proposed algorithm outperforms the naive method, which fails to account for the influence of confounding.
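A minimal frequentist sketch of the parametric g-formula for a two-period exposure regime is shown below; the chapter's Bayesian version would instead place priors on these working models. The variable names (L0, A0, L1, A1, Y) and linear model forms are assumptions for illustration:

```python
# Parametric g-formula, two time points: fit working models, then simulate
# forward under a fixed exposure regime and average the predicted outcome.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def g_formula(df, regime, n_mc=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # 1) fit models for the time-varying confounder L1 and final outcome Y
    m_l1 = smf.ols("L1 ~ A0 + L0", data=df).fit()
    m_y  = smf.ols("Y ~ A0 + A1 + L0 + L1", data=df).fit()
    # 2) simulate forward under the fixed exposure regime (a0, a1)
    a0, a1 = regime
    sim = pd.DataFrame({"L0": rng.choice(df.L0, n_mc), "A0": a0})
    sim["L1"] = m_l1.predict(sim) + rng.normal(0, np.sqrt(m_l1.scale), n_mc)
    sim["A1"] = a1
    return m_y.predict(sim).mean()     # E[Y] had everyone received (a0, a1)

# Causal contrast of always-high vs. always-low exposure:
#   g_formula(data, (1, 1)) - g_formula(data, (0, 0))
```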
Chapter 4 introduces causalbkmr, a novel R package currently available on GitHub. causalbkmr is designed to support the implementation of g-BKMR, BKMR Causal Mediation Analysis, and Multiple Imputation BKMR, offering a user-friendly and effective platform for executing these state-of-the-art methods in the context of complex mixture analysis. While the package bkmr is already available, causalbkmr expands on it by enabling its application to environmental mixture data within a causal inference framework. The implementation of these novel methodologies within causalbkmr allows causal interpretations to be extracted, enhancing the analytical capabilities the package provides.
Chapter 5 concludes with a discussion and outlines potential future directions for investigation.
|
628 |
How We Know When We Don't Know Enough: Neural Representations of Probabilistic Inference and Information Demand. Singletary, Nicholas Martin. January 2023
In real-world settings, decision-making typically resembles a stepwise process in which one decides which information to sample before deciding to which decision option to commit. The former step is called instrumental information-seeking, and theoretical and empirical findings indicate that it is mediated by the value of information (VOI), the extent to which obtaining information increases the expected value of future actions and decisions. Economic theory predicts that to estimate VOI, decision-makers conduct a preposterior analysis in which they prospect what they would expect to know about the decision options after observing the information—or, in terms of Bayesian inference, they should prospect the future posterior probabilities. But the neural mechanisms underlying this early step of the computation of VOI remain an open question.
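A toy preposterior analysis makes the VOI computation concrete: with a binary state, a noisy signal, and two actions, the value of observing the signal is the expected best decision value over the possible posteriors, minus the best value attainable on the prior alone. The numbers below are illustrative:

```python
# Value of information for a binary hypothesis and an 80%-reliable cue.
import numpy as np

prior = np.array([0.5, 0.5])          # P(state = 0), P(state = 1)
utility = np.array([[1.0, 0.0],       # utility[action, state]
                    [0.0, 1.0]])
lik = np.array([[0.8, 0.2],           # lik[signal, state] = P(signal | state)
                [0.2, 0.8]])

value_now = max(utility @ prior)      # best expected value acting on the prior

p_signal = lik @ prior                # marginal probability of each signal
value_after = 0.0
for s in range(2):
    posterior = lik[s] * prior / p_signal[s]     # Bayes' rule
    value_after += p_signal[s] * max(utility @ posterior)

print("VOI =", value_after - value_now)          # 0.8 - 0.5 = 0.3
```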
Therefore, to further investigate the neural substrates of instrumental information-seeking, we used functional magnetic resonance imaging (fMRI) in conjunction with two interrelated behavioral tasks in humans. With one task, we examined the demand for instrumental information, but since preposterior analysis relies on the prospection of potential future posterior beliefs, we included another task to examine how people form posterior beliefs after receiving information. We found that regions of posterior parietal cortex and occipital fusiform gyrus appear to support a preposterior analysis through the prospection of expected posterior certainty. This aligned with our finding of a region of parieto-occipital cortex that appears to support Bayesian inference by integrating the prior probability of a hypothesis with the likelihood of observed information. These results imply that parietal cortex plays a key role in Bayesian inference, supporting preposterior analysis during information-seeking in addition to Bayesian inference during categorical decision-making.
|
629 |
Modeling Nonignorable Missingness with Response Times Using Tree-based Framework in Cognitive Diagnostic Models. Yang, Yi. January 2023
As testing moves from paper-and-pencil to computer-based assessment, response accuracy (RA) and response time (RT) together offer the potential to improve the performance evaluation and ability estimation of test takers. Most joint models utilizing RAs and RTs simultaneously assume an IRT model for the RA measurement at the lower level, among which the hierarchical speed-accuracy (SA) model proposed by van der Linden (2007) is the most prevalent in the literature.
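For reference, the RT level of van der Linden's hierarchical model is a lognormal regression with item time intensity, item time discrimination, and person speed parameters; a small simulation sketch with illustrative parameter values:

```python
# van der Linden (2007) lognormal RT model:
#   log T_ij ~ Normal(beta_i - tau_j, 1 / alpha_i^2)
# beta: item time intensity, alpha: item time discrimination, tau: person speed.
import numpy as np

rng = np.random.default_rng(4)
n_persons, n_items = 5, 4
tau = rng.normal(0, 0.3, n_persons)        # person speed (higher = faster)
beta = rng.normal(4.0, 0.4, n_items)       # item time intensity (log-seconds)
alpha = rng.uniform(1.5, 2.5, n_items)     # item time discrimination

log_t = (beta[None, :] - tau[:, None]
         + rng.normal(0, 1, (n_persons, n_items)) / alpha[None, :])
rt = np.exp(log_t)                          # simulated response times (seconds)
print(rt.round(1))
```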
Zhan et al. (2017) extended the SA model to cognitive diagnostic modeling (CDM) by proposing the hierarchical joint response and times DINA (JRT-DINA) model, but little is known about its generalizability in the presence of missing data. Large-scale assessments are used in educational effectiveness studies to quantify educational achievement, and in them the amount of item nonresponse is not negligible (Pohl et al., 2012; Pohl et al., 2019; Rose et al., 2017; Rose et al., 2010) owing to lack of proficiency, lack of motivation, and/or lack of time.
Treating unplanned missingness as ignorable leads to biased sample-based estimates of item and person parameters (R. J. A. Little & Rubin, 2020; Rubin, 1976); therefore, in the past few decades, intensive effort has been focused on nonignorable missingness (Glas & Pimentel, 2008; Holman & Glas, 2005; Pohl et al., 2019; Rose et al., 2017; Rose et al., 2010; Ulitzsch et al., 2020a, 2020b). However, the great majority of these methods were limited in the item nonresponse types and/or model complexity they could handle until J. Lu and Wang (2020) incorporated the mixture cure-rate model (Lee & Ying, 2015) and the tree-based IRT framework (Debeer et al., 2017), which carries a built-in behavioral process for item nonresponses and thus introduces no additional latent propensity parameters into the joint model. Nevertheless, these approaches were discussed within the IRT framework, and such traditional measurement models cannot provide cognitive diagnostic information about attribute mastery.
This dissertation first postulates the CDMTree model, an extension of the tree-based RT process model to CDM, and then explores its efficacy through a real data analysis using the PISA 2012 computer-based assessment of mathematics data. A follow-up simulation study compares the proposed model to the JRT-DINA model under multiple conditions to deal with various types of nonignorable missingness, i.e., both omitted items (OIs) and not-reached items (NRIs) due to time limits. A fully Bayesian approach is used for model estimation with the Markov chain Monte Carlo (MCMC) method.
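A small sketch of the DINA response function at the measurement core of JRT-DINA (and of the proposed CDMTree) may help; the Q-matrix and item parameters below are illustrative:

```python
# DINA model: P(correct) = (1 - slip)^eta * guess^(1 - eta), where eta = 1
# iff the examinee masters every attribute the item's Q-matrix row requires.
import numpy as np

def dina_p_correct(alpha, q, slip, guess):
    """alpha: (n_persons, K) 0/1 attribute mastery; q: (n_items, K) Q-matrix;
    slip, guess: (n_items,) item parameters."""
    eta = (alpha[:, None, :] >= q[None, :, :]).all(axis=2)  # (persons, items)
    return np.where(eta, 1 - slip, guess)

alpha = np.array([[1, 1], [1, 0]])        # two examinees, two attributes
q = np.array([[1, 0], [1, 1]])            # item 2 requires both attributes
slip = np.array([0.1, 0.2])
guess = np.array([0.2, 0.25])
print(dina_p_correct(alpha, q, slip, guess))
# Examinee 2 lacks attribute 2, so item 2 falls back to its guessing rate.
```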
|
630 |
Training for Vigilance: Effects on Performance Diagnosticity, Stress, and Coping. Hausen, Michelle Jennifer, 22 September 2008
No description available.
|