371
Feasibility Study in Development of a Wearable Device to Enable Emotion Regulation in Children with Autism Spectrum Disorder. Hora, Manpreet Kaur, 17 September 2014.
Autism spectrum disorder (ASD) is a group of developmental disabilities characterized by impairments in social interaction and communication and by difficulties in emotion recognition and regulation. There is currently no cure for autism, but psychosocial interventions and medical treatments exist; however, very few have been trialed on young children, and others pose limitations. Strengthening young children's capacity to manage their emotions is important for academic success, so it is important to design and test the feasibility of an appropriate methodology that can teach emotion regulation to young children (age 3-6 years) with ASD. This thesis addresses the problem by proposing a novel framework that integrates physiology with Cognitive Behavior Theory to enable emotion regulation in the target population by exposing them to real-time stressful situations. The framework uses a feedback loop that measures the participant's physiology, estimates the level of stress being experienced, and provides audio feedback. The feasibility of the framework's individual building blocks was tested in pilot studies with nine typically developing children (age 3-6 years). The attention-capturing capacity of different audio representations was tested, and a stress-profile-generating system was designed and developed to map the participant's measured physiology onto a relative stress level. Of 43 instances of audio representations, 33 succeeded in capturing the participants' attention, and the stress profiles distinguished between the participants' stressed and relaxed states with an average accuracy of 83%. / Master of Science
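The closed loop described above (sense physiology, estimate stress, emit an audio cue) can be summarized in a short sketch. This is a hypothetical illustration, not the thesis's implementation: the sensor reader, baseline values, threshold, equal weighting, and play_audio routine are all assumed names and choices.

```python
import time

def stress_level(heart_rate, skin_conductance, baseline):
    # Map deviations from a per-child baseline onto a 0-1 stress score.
    # The equal weighting is an assumption, not the thesis's mapping.
    hr_dev = (heart_rate - baseline["hr"]) / baseline["hr"]
    sc_dev = (skin_conductance - baseline["sc"]) / baseline["sc"]
    return max(0.0, min(1.0, 0.5 * hr_dev + 0.5 * sc_dev))

def feedback_loop(read_sensors, play_audio, baseline, threshold=0.6):
    # Poll the wearable's sensors once per second; when the estimated
    # stress crosses the threshold, play an attention-capturing audio cue.
    while True:
        hr, sc = read_sensors()
        if stress_level(hr, sc, baseline) > threshold:
            play_audio("calming_cue.wav")
        time.sleep(1.0)
```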
372
Incorporating Obfuscation Techniques in Privacy Preserving Database-Driven Dynamic Spectrum Access Systems. Zabransky, Douglas Milton, 11 September 2018.
Modern innovation is a driving force behind increased spectrum crowding. Several studies by the National Telecommunications and Information Administration (NTIA), the Federal Communications Commission (FCC), and other groups have proposed Dynamic Spectrum Access (DSA) as a promising solution to alleviate spectrum crowding. Spectrum assignment decisions in DSA are made by a centralized entity referred to as a spectrum access system (SAS); however, maintaining spectrum utilization information in the SAS presents privacy risks, as sensitive Incumbent User (IU) operation parameters must be stored by the SAS in order to perform spectrum assignments properly. These sensitive operation parameters may be compromised if the SAS is the target of a cyber attack or of an inference attack executed by a secondary user (SU).
In this thesis, we explore the operational security of IUs in SAS-based DSA systems and propose a novel privacy-preserving SAS-based DSA framework, Suspicion Zone SAS (SZ-SAS), the first such framework that protects against both inference attacks in areas with sparsely distributed IUs and an untrusted or compromised SAS. We then define modifications to the SU inference attack algorithm that demonstrate the necessity of applying obfuscation to SU query responses. Finally, we evaluate obfuscation schemes compatible with SZ-SAS, verifying their effectiveness in preventing an SU inference attack. Our results show SZ-SAS is capable of utilizing compatible obfuscation schemes to prevent the SU inference attack while operating using only homomorphically encrypted IU operation parameters. / Master of Science / Dynamic Spectrum Access (DSA) allows users to opportunistically access spectrum resources that were previously reserved for specified parties. This spectrum-sharing protocol has been identified as a potential solution to the issue of spectrum crowding. Sharing will be accomplished through the use of a centralized server, known as a spectrum access system (SAS). However, current SAS-based DSA proposals require users to submit information such as location and transmission properties to the SAS. The privacy of these users is of the utmost importance, as many existing users in these spectrum bands are military radars and other users for which operational security is pivotal. Storing the information for these users in a central database is a major privacy issue, as this information could be leaked if the SAS is compromised by a malicious party. Additionally, malicious secondary users (SUs) may perform an inference attack, which could also reveal the locations of these military radars. In this thesis, we demonstrate a SAS framework, SZ-SAS, which allows the SAS to function without direct knowledge of user information. We also propose techniques for mitigating the inference attack that are compatible with SZ-SAS.
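The core obfuscation idea (deliberately randomizing spectrum-query responses so that a denial no longer reliably signals IU presence) can be sketched as follows. This is a generic illustration of response obfuscation, not the specific SZ-SAS scheme; the false-denial rate is an assumed parameter.

```python
import random

def obfuscated_response(channel_available, false_denial_rate=0.3):
    # A truthful denial always stands: the IU must still be protected.
    if not channel_available:
        return False
    # An available channel is sometimes denied anyway, so an SU that maps
    # denials over many queries cannot cleanly infer IU locations.
    return random.random() >= false_denial_rate
```

The cost of such schemes is reduced spectrum utilization for SUs; the thesis's evaluation weighs which obfuscation schemes balance that cost against resistance to the inference attack.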
373
Narrative Generation to Support Causal Exploration of Directed Graphs. Choudhry, Arjun, 02 June 2020.
Causal graphs are a useful notation for representing the interplay between actors as well as the polarity and strength of the relationships they share. They are used extensively in educational, professional, and industrial contexts to simulate different scenarios, validate behavioral aspects, visualize the connections between different processes, and explore the effects of changing certain nodes. However, as causal graphs grow, interpreting them becomes increasingly difficult. In such cases, new analytical tools are required to enhance the user's comprehension of the graph, in terms of both correctness and speed. To this end, this thesis introduces 1) a system that allows for causal exploration of directed graphs while enabling the user to see the effect of interventions on target nodes, 2) the use of natural language generation techniques to create a coherent passage explaining the propagation effects, and 3) the results of an expert user study validating the efficacy of the narratives in enhancing the user's understanding of causal graphs. Overall, the system aims to enhance user experience and promote further causal exploration. / Master of Science / Narrative generation is the art of creating coherent snippets of text that cumulatively describe a succession of events played out across a period of time. These goals are also shared by causal graphs – models that encapsulate inferences between nodes through the strength and polarity of the connecting edges. Causal graphs are a useful mechanism for visualizing changes propagating among the nodes of a system. However, as a graph grows to address real-world actors and their interactions, it becomes increasingly difficult to understand causal inferences between distant nodes, especially if the graph is cyclic. Moreover, if the values of several nodes are altered and the cumulative effect of the change must be perceived on a set of target nodes, the task becomes extremely difficult for the human eye. This thesis attempts to alleviate this problem by generating dynamic narratives detailing the effect of one or more interventions on one or more target nodes, incorporating time-series analysis, Wikification, and spike detection. The narrative enhances the user's understanding of the change propagation occurring in the system. Its efficacy was corroborated by user studies, which concluded that the presence of the narrative aids the user's confidence, correctness, and speed while exploring the causal network.
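As a concrete picture of the propagation the system narrates, consider pushing an intervention through a weighted, signed, possibly cyclic causal graph. The sketch below is a hypothetical simplification (linear effects, fixed-point iteration); the thesis's actual propagation model is not specified in the abstract.

```python
def propagate(edges, interventions, steps=50):
    """Estimate node changes after interventions.

    edges: {(src, dst): weight}; signed weights encode polarity/strength.
    interventions: {node: imposed change}.
    Iterates x <- b + W^T x, which handles cycles and converges when edge
    weights are small enough (spectral radius below 1), an assumption here.
    """
    delta = dict(interventions)
    for _ in range(steps):
        new = dict(interventions)
        for (src, dst), w in edges.items():
            new[dst] = new.get(dst, 0.0) + w * delta.get(src, 0.0)
        delta = new
    return delta

# Example: A -> B (+0.8), B -> C (-0.5); intervene on A.
effects = propagate({("A", "B"): 0.8, ("B", "C"): -0.5}, {"A": 1.0})
# effects == {"A": 1.0, "B": 0.8, "C": -0.4}
```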
374
Energy And Power Systems Simulated Attack Algorithm For Defense Testbed And Analysis. Ruttle, Zachary Andrew, 31 May 2023.
The power grid has evolved over many decades with the use of cyber systems and communications such as Supervisory Control And Data Acquisition (SCADA); however, due to their connectivity to the internet, the cyber-power system can be infiltrated by malicious attackers, and encryption alone is not a sufficient solution. Several cyber security measures are currently in development, including some based on artificial intelligence, but there is a need for a varying yet consistent attack algorithm to serve as a testbed on which these AI and other practices can be trained and tested. This matters because, in the event of a real attack, it is not possible to know exactly where the attacker will strike and in what order. The method proposed in this thesis therefore uses criminology concepts and fuzzy logic inference to create such an algorithm and to determine its effectiveness in making decisions on a cyber-physical system model. The method takes various characteristics of the attacker as input, builds their ideal target node, compares the nodes to the high-impact target, and chooses one as the goal. Based on that target and their knowledge, the attacker attacks nodes for as long as resources remain. The results show that the proposed method can create a variety of attacks with varying damaging effects, and a further set of tests shows the possibility of multiple attack types, such as denial of service and false data injection. The proposed method has been validated using an extended cyber-physical IEEE 13-node distribution system, with sensitivity tests to ensure that the created ruleset handles each of the inputs well. / Master of Science / Over the last few decades, information and communications technology has become commonplace in electric power and energy systems around the world. As a result, it has attracted hackers who exploit cyber vulnerabilities to attack critical systems and cause damage, e.g., to the critical infrastructure for electric energy. The power grid is a wide-area, distributed infrastructure with numerous power plants, substations, transmission and distribution lines, and customer facilities. For operation and control, the power grid must acquire measurements from substations and send control commands from the control center to substations. This cyber-physical system has vulnerabilities that hackers can exploit to launch falsified measurements or commands. Much research is concerned with detecting and mitigating cyber threats: determining whether an attack is occurring and, if so, what to do about it. For these techniques to work properly, however, there must be a way to test how the defense will understand the purpose and target of an actual attack, which is where the proposed modeling and simulation method for an attacker comes in. Using a set of values for the attacker's resources, motivation, and other characteristics, the algorithm determines the attacker's best target and then finds the closest point on the power grid that they can attack. While resources remain from the initial allotment, the attacker keeps choosing places and executing attacks. The results show that these input characteristics affect the decisions the attacker makes, and the damage to the system reflects those values as well; this was tested by examining the outcomes at the high-impact nodes for each input value. It demonstrates that an attacker can be modeled for testing purposes in simulation.
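A zero-order (Sugeno-style) fuzzy inference step of the kind the abstract describes, mapping attacker characteristics to a target-selection score, might look like the following. The membership breakpoints, rule set, and 0-10 scales are assumptions for illustration; the thesis's actual ruleset is not given in the abstract.

```python
def tri(x, a, b, c):
    # Triangular membership function peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def attack_priority(motivation, resources):
    # Fuzzify two attacker characteristics on assumed 0-10 scales.
    mot_low, mot_high = tri(motivation, -1, 0, 6), tri(motivation, 4, 10, 11)
    res_low, res_high = tri(resources, -1, 0, 6), tri(resources, 4, 10, 11)

    # Rule firing strengths (min acts as fuzzy AND), crisp consequents.
    rules = [
        (min(mot_high, res_high), 0.9),  # eager and well-funded: high
        (min(mot_high, res_low),  0.5),
        (min(mot_low,  res_high), 0.4),
        (min(mot_low,  res_low),  0.1),
    ]
    total = sum(w for w, _ in rules)
    # Weighted-average defuzzification yields the final score.
    return sum(w * out for w, out in rules) / total if total else 0.0

print(attack_priority(motivation=8, resources=7))  # high priority, 0.9
```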
375
NOISE AWARE BAYESIAN PARAMETER ESTIMATION IN BIOPROCESSES: USING NEURAL NETWORK SURROGATE MODELS WITH NON-UNIFORM DATA SAMPLING. Weir, Lauren, January 2024.
This thesis demonstrates a parameter estimation technique for bioprocesses that utilizes measurement noise in experimental data to determine credible intervals on parameter estimates, information of potential use in prediction, robust control, and optimization. To determine these estimates, the work implements Bayesian inference using nested sampling and presents an approach to developing neural network (NN) based surrogate models. To address challenges associated with non-uniform sampling of experimental measurements, an NN structure is proposed. The resultant surrogate model is used within a nested sampling algorithm that draws candidate parameter values from the parameter space and uses the NN to calculate the model output for the likelihood function, which is based on the joint probability distribution of the noise of the output variables. The method is illustrated against simulated data, then with experimental data from a Sartorius fed-batch bioprocess. Results demonstrate the feasibility of the proposed technique to enable rapid parameter estimation for bioprocesses. / Thesis / Master of Applied Science (MASc) / Bioprocesses require models that can be developed quickly for rapid production of desired pharmaceuticals. Parameter estimation is necessary for these models, especially first-principles models, and generating parameter estimates with confidence intervals is important for model-based control. Challenges that must be addressed are the presence of non-uniform sampling and measurement noise in experimental data. This thesis demonstrates a method of parameter estimation that generates parameter estimates with credible intervals by incorporating measurement noise in experimental data, while also employing a dynamic neural network surrogate model that can process non-uniformly sampled data. The proposed technique implements Bayesian inference using nested sampling and was tested against both simulated and real experimental fed-batch data.
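To make the likelihood step concrete: with a trained surrogate f(params) -> predicted outputs, nested sampling only needs a log-likelihood built from the measurement-noise distribution and a prior transform. The sketch below assumes independent Gaussian noise with known per-output standard deviations and uniform priors; the surrogate, bounds, and noise model are stand-ins, not the thesis's exact choices.

```python
import numpy as np

y_obs = np.array([1.2, 0.8, 2.1])        # measured outputs (placeholder)
sigma = np.array([0.1, 0.1, 0.2])        # measurement noise std devs
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 2.0])  # prior bounds

def surrogate(theta):
    # Stand-in for the trained NN surrogate mapping parameters to outputs.
    return np.array([theta[0], theta[1], theta[0] + theta[1]])

def log_likelihood(theta):
    # Gaussian log-likelihood from the joint distribution of output noise.
    r = (y_obs - surrogate(theta)) / sigma
    return -0.5 * np.sum(r**2 + np.log(2 * np.pi * sigma**2))

def prior_transform(u):
    # Map the unit hypercube to uniform priors, as nested samplers expect.
    return lo + u * (hi - lo)

# These two functions plug directly into a nested sampler, e.g.:
# sampler = dynesty.NestedSampler(log_likelihood, prior_transform, ndim=2)
```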
376
Incremental Learning approaches to Biomedical decision problems. Tortajada Velert, Salvador, 21 September 2012.
During the last decade, a new trend in medicine has been transforming the nature of healthcare from reactive to proactive. This new paradigm is evolving into a personalized medicine in which the prevention, diagnosis, and treatment of disease are focused on individual patients. The paradigm is known as P4 medicine. Among other key benefits, P4 medicine aspires to detect diseases at an early stage and to introduce diagnoses that stratify patients and diseases, so that the optimal therapy can be selected based on individual observations, taking patient outcomes into account to empower the physician, the patient, and their communication.
This transformation relies on the availability of complex multi-level biomedical data that are increasingly accurate, since it is possible to find exactly the information that is needed, but also increasingly noisy, since access to that information is ever more challenging. To take advantage of this information, a major effort has been made in recent decades to digitize medical records and to develop new mathematical and computational methods for extracting maximum knowledge from patient records, building dynamic and disease-predictive models from massive amounts of integrated clinical and biomedical data. This requirement enables the use of computer-assisted Clinical Decision Support Systems for the management of individual patients.
Clinical Decision Support Systems (CDSS) are computational systems that provide precise and specific knowledge for the medical decisions to be adopted in the diagnosis, prognosis, treatment, and management of patients. CDSS are closely related to the concept of evidence-based medicine, since they infer medical knowledge from the biomedical databases and acquisition protocols used to develop the systems, give computational support based on evidence for clinical practice, and evaluate the performance and added value of the solution for each specific medical problem. / Tortajada Velert, S. (2012). Incremental Learning approaches to Biomedical decision problems [Tesis doctoral]. Universitat Politècnica de València. https://doi.org/10.4995/Thesis/10251/17195
377
Event classification and location prediction from tweets during disasters. Singh, J.P., Dwivedi, Y.K., Rana, Nripendra P., Kumar, A., Kapoor, K.K., 25 September 2020.
Social media is a platform for expressing one's views in real time. This real-time nature makes social media an attractive tool for disaster management, as both victims and officials can put their problems and solutions in the same place at the same time. We investigate Twitter posts during a flood-related disaster and propose an algorithm to identify victims asking for help. The developed system takes tweets as input and categorizes them into high- or low-priority tweets. For high-priority tweets with no location information, the user's location is predicted from the user's historical locations using a Markov model. The system works well, with a classification accuracy of 81% and a location prediction accuracy of 87%. The system can be extended for use in other natural disasters, such as earthquakes and tsunamis, as well as man-made disasters such as riots and terrorist attacks. It is the first of its kind aimed at helping victims during disasters based on their tweets.
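The location-prediction step lends itself to a first-order Markov chain over a user's historical locations: count observed transitions, then predict the most likely next location. The sketch below is a minimal illustration under that assumption, not the paper's exact model.

```python
from collections import Counter, defaultdict

def build_transitions(histories):
    # histories: one chronological list of location labels per user.
    transitions = defaultdict(Counter)
    for locs in histories:
        for prev, nxt in zip(locs, locs[1:]):
            transitions[prev][nxt] += 1
    return transitions

def predict_location(transitions, last_known):
    # Most probable next location given the last known one, if any.
    if transitions[last_known]:
        return transitions[last_known].most_common(1)[0][0]
    return last_known  # fall back to the last observed location

histories = [["home", "office", "market", "office"],
             ["home", "market", "office"]]
model = build_transitions(histories)
print(predict_location(model, "market"))  # -> "office"
```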
378
Causal Inference Using Bayesian Network For Search And Rescue. Belden, Amanda, 01 June 2024.
In Search and Rescue (SAR) missions, people who are considered missing have a much higher probability of being found dead than those who are not considered missing. Dementia patients are especially likely to be declared missing; in fact, after removing those with dementia, the probability of a mission being regarded as a missing-person case is only about 10%. Additionally, those who go missing are much more likely to be on private land than in protected areas such as forests and parks. These and similar associations can be represented and investigated using a Bayesian network trained on SAR mission data. By finding associations between factors that affect these missions, SAR teams can find patterns in historical cases and apply them to future cases in order to narrow down their search areas and improve their plans, ideally leading to lower search times and fewer deaths and unsolved cases. Causal inference allows causal, rather than merely associational, relationships to be determined, so SAR teams can make decisions based on the learned relationships with the expectation that those decisions will cause the changes the Bayesian network predicts.
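A minimal version of such a network can be written with a library like pgmpy; the structure and all probabilities below are illustrative placeholders, not values learned from SAR data.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Dementia -> Missing -> FoundDead (toy structure, assumed for illustration)
model = BayesianNetwork([("Dementia", "Missing"), ("Missing", "FoundDead")])
model.add_cpds(
    TabularCPD("Dementia", 2, [[0.9], [0.1]]),
    TabularCPD("Missing", 2, [[0.90, 0.30],    # P(Missing=0 | Dementia)
                              [0.10, 0.70]],   # P(Missing=1 | Dementia)
               evidence=["Dementia"], evidence_card=[2]),
    TabularCPD("FoundDead", 2, [[0.98, 0.85],
                                [0.02, 0.15]],
               evidence=["Missing"], evidence_card=[2]),
)

infer = VariableElimination(model)
# How does evidence of dementia shift the outcome distribution?
print(infer.query(["FoundDead"], evidence={"Dementia": 1}))
```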
379
Learning and Reasoning in Hybrid Structured Spaces. Morettin, Paolo, 29 May 2020.
Many real-world AI applications involve reasoning over both continuous and discrete variables, while requiring some level of symbolic reasoning that can provide guarantees on the system's behaviour. Unfortunately, most existing probabilistic models do not efficiently support hard constraints, or they are limited to purely discrete or purely continuous scenarios. Weighted Model Integration (WMI) is a recent and general formalism that enables probabilistic modeling and inference in hybrid structured domains. A difference of WMI-based inference algorithms with respect to most alternatives is that probabilities are computed inside a structured support involving both logical and algebraic relationships between variables. While some progress has been made in recent years and the topic is increasingly gaining interest from the community, research in this area is at an early stage. These aspects motivate the study of hybrid and symbolic probabilistic models and the development of scalable inference procedures and effective learning algorithms in these domains. This PhD thesis embodies my effort in studying scalable reasoning and learning techniques in the context of WMI.
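For reference, the standard definition of WMI from the literature (not spelled out in this abstract) integrates a weight function over the models of an SMT formula with Boolean variables A and real variables x:

```latex
\mathrm{WMI}(\varphi, w \mid x, A)
  \;=\; \sum_{\mu \in \mathbb{B}^{A}}
        \int_{\{x \,:\, (x,\mu) \models \varphi\}} w(x, \mu)\, dx
```

Probabilistic queries then reduce to ratios of WMI computations, \(\Pr(\Delta \mid \varphi) = \mathrm{WMI}(\varphi \wedge \Delta, w) / \mathrm{WMI}(\varphi, w)\), which is where the structured support mentioned above enters the computation.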
380
Essays on DSGE Models and Bayesian Estimation. Kim, Jae-yoon, 11 June 2018.
For an empirical analysis, the statistical model implied by the theoretical model is crucial. The statistical model is simply the set of probabilistic assumptions imposed on the data, and invalid probabilistic assumptions undermine the reliability of statistical inference, rendering the empirical analysis untrustworthy. Hence, to secure trustworthy evidence, one should always validate the implicit statistical model before drawing any empirical result from a theoretical model. This perspective is used to shed light on a widely used category of macroeconometric models known as Dynamic Stochastic General Equilibrium (DSGE) models. Using U.S. time-series data, the thesis demonstrates that a widely used econometric model for the U.S. economy is severely statistically misspecified: almost all of its probabilistic assumptions are invalid for the data. The thesis proceeds to respecify the implicit statistical model behind the theoretical model with a view to securing its statistical adequacy (the validity of its probabilistic assumptions). Using the respecified statistical model, it calls into question the literature evaluating the theoretical adequacy of current DSGE models, which ignores the fact that such evaluations are untrustworthy when based on statistically unreliable procedures. / Ph. D.
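As one concrete example of the kind of probabilistic-assumption checking the thesis advocates, the residuals of an estimated equation can be tested for independence and normality before any inference is trusted. The sketch below uses standard statsmodels diagnostics on simulated residuals; it illustrates the workflow, not the thesis's specific respecification.

```python
import numpy as np
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.stats.stattools import jarque_bera

rng = np.random.default_rng(0)
resid = rng.normal(size=200)  # stand-in for estimated model residuals

# Independence: Ljung-Box test for residual autocorrelation.
lb = acorr_ljungbox(resid, lags=[12], return_df=True)
print(lb)  # small p-values would indicate temporal dependence

# Normality: Jarque-Bera test on the residuals.
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(resid)
print(jb_pvalue)  # small p-values would indicate non-normality
```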