881
Aditivní dvojice v kvantitativní teorii typů / Additive Pairs in Quantitative Type Theory. Svoboda, Tomáš. January 2021 (has links)
Both dependent types and linear types have desirable properties. Dependent types can express functional dependencies between inputs and outputs, while linear types offer control over the use of computational resources. Combining the two systems has been difficult because of their different interpretations of the presence of variables in the context. Quantitative Type Theory (QTT) combines dependent types and linear types by using a semiring to track how every resource is used. We extend QTT with the additive pair and additive unit types, express the complete QTT rules in bidirectional form, and then present our interpreter of a simple language based on QTT.
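The thesis's interpreter is not reproduced here; as a minimal illustrative sketch of the kind of bookkeeping QTT requires (hypothetical code, using the common zero-one-many semiring rather than anything taken from the thesis), the fragment below shows semiring addition and multiplication of usage annotations and the pointwise combination of usage contexts. Roughly as with linear logic's additive conjunction, an additive pair checks both components against the same resources, whereas a multiplicative pair adds the contexts of its components.

```python
from enum import Enum

class Usage(Enum):
    """The zero-one-many semiring often used to instantiate QTT (e.g. in Idris 2)."""
    ZERO = "0"   # erased: may appear only in types
    ONE = "1"    # linear: must be used exactly once at runtime
    MANY = "w"   # unrestricted

def u_add(a: Usage, b: Usage) -> Usage:
    """Semiring addition: combines the usages of one variable across two subterms."""
    if a is Usage.ZERO:
        return b
    if b is Usage.ZERO:
        return a
    return Usage.MANY          # 1 + 1 = w, and w absorbs everything else

def u_mul(a: Usage, b: Usage) -> Usage:
    """Semiring multiplication: scales a subterm's usages by the multiplicity
    of the position in which the subterm occurs."""
    if a is Usage.ZERO or b is Usage.ZERO:
        return Usage.ZERO
    if a is Usage.ONE:
        return b
    if b is Usage.ONE:
        return a
    return Usage.MANY

def add_contexts(ctx1: dict, ctx2: dict) -> dict:
    """Pointwise semiring addition of usage contexts, as used when typing a
    multiplicative pair. An additive pair instead checks both components
    against the *same* context, so no addition takes place there."""
    out = dict(ctx1)
    for var, use in ctx2.items():
        out[var] = u_add(out.get(var, Usage.ZERO), use)
    return out

# Example: a variable used linearly in each component of a multiplicative pair
# ends up with unrestricted usage, which a linear binding would then reject.
print(add_contexts({"x": Usage.ONE}, {"x": Usage.ONE}))   # {'x': <Usage.MANY: 'w'>}
```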
882
Calculus Misconceptions of Undergraduate Students. McDowell, Yonghong L. January 2021 (has links)
It is common for students to make mistakes while solving mathematical problems. Some of these mistakes might be caused by false ideas, or misconceptions, that students developed during their learning or practice.
Calculus courses at the undergraduate level are mandatory for several majors. The introductory course of calculus—Calculus I—requires fundamental skills. Such skills can prepare a student for higher-level calculus courses, additional higher-division mathematics courses, and/or related disciplines that require a comprehensive understanding of calculus concepts. Nevertheless, conceptual misunderstandings are widespread among undergraduate students learning calculus. Understanding the nature of and reasons for how and why students developed their conceptual misunderstandings—misconceptions—can assist a calculus educator in implementing effective strategies to help students recognize or correct their misconceptions.
For this purpose, the current study was designed to examine students’ misconceptions in order to explore the nature of and reasons for how and why they developed them through their thought processes. The study instrument—Calculus Problem-Solving Tasks (CPSTs)—was originally created for understanding the issues students have in learning calculus concepts; it features a set of 17 open-ended, non-routine calculus problem-solving tasks that check students’ conceptual understanding. The content of these tasks is pertinent to the issues undergraduate students encounter in learning the function concept and, subsequently, the concepts of limit, tangent, and differentiation that scholars have addressed. Semi-structured interviews with 13 college mathematics faculty were conducted to verify the content validity of the CPSTs and to identify misconceptions a student might exhibit when solving these tasks. The interview results were analyzed using a standard qualitative coding methodology. The instrument was then developed and finalized based on the faculty’s perspectives about the misconceptions associated with each problem in the CPSTs.
The researcher used a qualitative methodology to design the research and a purposive sampling technique to select participants for the study. The qualitative means were helpful in collecting three sets of data: one from the semi-structured college faculty interviews; one from students’ explanations of their solutions; and one from semi-structured student interviews. In addition, the researcher administered two surveys (a Faculty Demographic Survey for college faculty participants and a Student Demographic Survey for student participants) to learn about participants’ background information and used this as evidence of the qualitative data’s reliability. Semantic analysis techniques allowed the researcher to analyze descriptions of faculty’s and students’ explanations of their solutions. Bar graphs and frequency distribution tables were used to identify students who incorrectly solved each problem in the CPSTs.
Seventeen undergraduate students from one northeastern university who had taken the first course of calculus at the undergraduate level solved the CPSTs. Students’ solutions were labeled according to three categories: CA (correct answer), ICA (incorrect answer), and NA (no answer); the researcher organized these categories using bar graphs and frequency distribution tables. The explanations students provided in their solutions were analyzed to isolate misconceptions from mistakes; the analysis results were then used to develop the student interview questions and to justify the selection of students for interviews. All participants exhibited misconceptions, as well as substantial mistakes beyond those misconceptions, in their solutions and were invited to be interviewed. Five of the 17 participants, who majored in mathematics, took part in individual semi-structured interviews. The analysis of the interview data served to confirm their misconceptions and to identify their thought processes in problem solving. Coding analysis was used to develop theories associated with the results from both the college faculty and student interviews as well as the explanations students gave in solving the problems. The coding was done in three stages: the first, or initial coding, identified the mistakes; the second, or focused coding, separated misconceptions from mistakes; and the third elucidated students’ thought processes to trace their cognitive obstacles in problem solving.
Regarding the analysis of the student interviews, common patterns in students’ cognitive conflicts during problem solving were derived semantically from their thought processes to explain how and why students developed the misconceptions that underlay their mistakes. How students solved problems, and the reasons for their misconceptions, were self-directed and controlled by their memories of concept images and algorithmic procedures. Students seemed to lack conceptual understanding of the calculus concepts discussed in the current study in that they solved conceptual problems as they would procedural ones, relying on faulty memorization and familiarity. Meanwhile, students had not mastered the basic capacity to generalize and abstract; a majority of them failed to translate the semantics of, and transliterate, the mathematical notations within the problem context and were unable to synthesize the information appropriately to solve the problems.
883
Real-Time Ray Tracing With Polarization Parameters. Enfeldt, Viktor. January 2020 (has links)
Background. The real-time renderers used in video games and similar graphics applications do not model the polarization aspect of light. Polarization parameters have previously been incorporated in some offline ray-traced renderers to simulate polarizing filters and various optical effects. As ray tracing becomes increasingly prevalent in real-time renderers, these polarization techniques could potentially be used to simulate polarization and its optical effects in real-time applications as well. Objectives. This thesis aims to determine if an existing polarization technique from offline renderers is, from a performance standpoint, viable to use in real-time ray-traced applications to simulate polarizing filters, or if further optimizations and simplifications would be needed. Methods. Three ray-traced renderers were implemented using the DirectX RayTracing API: one polarization-less Baseline version; one Polarization version using an existing polarization technique; and one optimized Hybrid version, which is a combination of the other two. Their performance was measured and compared in terms of frametimes and VRAM usage in three different scenes and with five different ray counts. Results. The Polarization renderer is ca. 30% slower than the Baseline in the two more complex scenes, and the Hybrid version is around 5–15% slower than the Baseline in all tested scenes. The VRAM usage of the Polarization version was higher than that of the Baseline version in the tests with higher ray counts, but only by negligible amounts. Conclusions. The Hybrid version has the potential to be used in real-time applications where high frame rates are important, but not paramount (such as the commonly featured photo modes in video games). The performance impact of the Polarization renderer's implementation is greater, but it could potentially be used as well. Due to limitations in the measurement process and the scale of the test application, no conclusions could be made about the implementations' impact on VRAM usage.
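The polarization technique evaluated in the thesis is implemented against the DirectX RayTracing API; independently of that implementation, the Stokes-Mueller formalism that such techniques are commonly built on can be sketched in a few lines of NumPy. The example below is illustrative only, not code from the thesis: a Stokes vector (I, Q, U, V) describes a ray's polarization state, and each optical element or filter is a 4x4 Mueller matrix applied to it.

```python
import numpy as np

def linear_polarizer(theta: float) -> np.ndarray:
    """Mueller matrix of an ideal linear polarizer whose transmission
    axis is at angle theta (radians) from the reference x-axis."""
    c, s = np.cos(2.0 * theta), np.sin(2.0 * theta)
    return 0.5 * np.array([
        [1.0, c,     s,     0.0],
        [c,   c * c, s * c, 0.0],
        [s,   s * c, s * s, 0.0],
        [0.0, 0.0,   0.0,   0.0],
    ])

# Unpolarized light of unit intensity: Stokes vector (I, Q, U, V) = (1, 0, 0, 0).
unpolarized = np.array([1.0, 0.0, 0.0, 0.0])

# Passing it through a polarizer halves the intensity and fully polarizes it.
after_first = linear_polarizer(0.0) @ unpolarized            # -> (0.5, 0.5, 0, 0)

# A second polarizer at 90 degrees blocks the remaining light (Malus's law).
after_second = linear_polarizer(np.pi / 2) @ after_first     # intensity ~ 0.0

print(after_first[0], after_second[0])
```

In a full renderer the Stokes reference frame must also be rotated between surface interactions, which adds further per-ray work on top of the matrix-vector products shown here.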
884
Malliavin-Stein Method in Stochastic Geometry. Schulte, Matthias. 19 March 2013 (has links)
In this thesis, abstract bounds for the normal approximation of Poisson functionals are obtained via the Malliavin-Stein method and used to derive central limit theorems for problems from stochastic geometry. A Poisson functional is a random variable depending on a Poisson point process. It is known from stochastic analysis that every square-integrable Poisson functional has a representation as a (possibly infinite) sum of multiple Wiener-Itô integrals. This decomposition is called the Wiener-Itô chaos expansion, and the integrands are called the kernels of the Wiener-Itô chaos expansion. An explicit formula for these kernels is due to Last and Penrose.
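Written out, the chaos expansion described above takes the following standard form (notation as in the Last-Penrose literature, not quoted from the thesis):

```latex
F \;=\; \mathbb{E}[F] \;+\; \sum_{n=1}^{\infty} I_n(f_n),
\qquad
f_n(x_1,\dots,x_n) \;=\; \frac{1}{n!}\,
\mathbb{E}\!\left[ D^{n}_{x_1,\dots,x_n} F \right],
```

where $I_n$ denotes the $n$-fold multiple Wiener-Itô integral with respect to the compensated Poisson process and $D^{n}$ the iterated difference (add-one-cost) operator; the second identity is the explicit kernel formula of Last and Penrose mentioned above.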
Via their Wiener-Itô chaos expansions, the so-called Malliavin operators are defined. By combining Malliavin calculus and Stein's method, a well-known technique for deriving limit theorems in probability theory, bounds for the normal approximation of Poisson functionals in the Wasserstein distance, and of vectors of Poisson functionals in a similar distance, were obtained by Peccati, Solé, Taqqu, and Utzet and by Peccati and Zheng, respectively. An analogous bound for the univariate normal approximation in the Kolmogorov distance is derived here.
In order to evaluate these bounds, one has to compute the expectation of products of multiple Wiener-Itô integrals, which are complicated sums of deterministic integrals. Therefore, the bounds for the normal approximation of Poisson functionals reduce to sums of integrals depending on the kernels of the Wiener-Itô chaos expansion.
The strategy for deriving central limit theorems for Poisson functionals is to compute the kernels of their Wiener-Itô chaos expansions, to insert these kernels into the bounds for the normal approximation, and to show that the bounds vanish asymptotically.
Following this approach, central limit theorems for several problems from stochastic geometry are derived. Univariate and multivariate central limit theorems are shown for some functionals of the intersection process of Poisson k-flats and for the number of vertices and the total edge length of a Gilbert graph. These Poisson functionals are so-called Poisson U-statistics, which have a simpler structure since their Wiener-Itô chaos expansions are finite, i.e. they consist of finitely many multiple Wiener-Itô integrals. As examples of Poisson functionals with infinite Wiener-Itô chaos expansions, central limit theorems are proven for the volume of the Poisson-Voronoi approximation of a convex set and for the intrinsic volumes of Boolean models.
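For context, a Poisson U-statistic of order $k$ with kernel $f$ has the general form (a standard definition rather than a formula quoted from the thesis):

```latex
S \;=\; \sum_{(x_1,\dots,x_k) \,\in\, \eta^{k}_{\neq}} f(x_1,\dots,x_k),
```

where $\eta$ is the underlying Poisson point process, $\eta^{k}_{\neq}$ is the set of all $k$-tuples of distinct points of $\eta$, and $f$ is a fixed symmetric kernel; for such a functional the chaos expansion stops after the $k$-th term, which is the finiteness property exploited above.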
885
Unraveling the Paradox: Balancing Personalization and Privacy in AI-Driven Technologies : Exploring Personal Information Disclosure Behavior to AI Voice Assistants and Recommendation Systems. Saliju, Leona; Deboi, Vladyslav. January 2023 (has links)
As society progresses towards a more algorithmic era, the influence of artificial intelligence (AI) is driving a revolution in the digital landscape. At their core, AI applications aim to engage customers by providing carefully tailored, data-driven personalization and customization of products, services, and marketing mix elements. However, the adoption of AI, while promising enhanced personalization, poses challenges due to the increased collection, analysis, and control of consumer data by technology owners. Consequently, concerns over data privacy have emerged as a primary consideration for individuals. This paper delves deeper into the implications of the personalization-privacy paradox, aiming to provide a comprehensive analysis of the challenges and opportunities it presents. The purpose of this thesis is to understand users’ privacy concerns and willingness to disclose their personal information to AI technologies by addressing the limitations of previous research and utilizing qualitative methods to gain a more in-depth understanding of consumer views. To this end, a qualitative approach combining deductive and inductive reasoning was followed, and empirical data were collected through 20 semi-structured interviews. The participants were chosen using a purposive sampling technique. Users’ privacy concerns and willingness to disclose personal information to AI technologies differ significantly. They depend not only on the individual, but also on the type of AI technology, the company providing it, the possibility of obtaining additional benefits, and whether the company is transparent about its data collection and can provide proof of security.
886
Programmatic Advertising : Effective marketing strategy or invasion of privacy - A study of consumer attitudes towards Programmatic Advertising. Bolkvadze, Endi; Ekblad, Rebecka. January 2022 (has links)
Digital marketing is constantly adapting and evolving in line with technological advances. One such advance is digitalization, which has given rise to Programmatic Advertising (PA). In order to practice PA, companies need to collect data about consumers' preferences and personal interests. Consumers, on the other hand, have a need to protect their privacy. The needs of these two parties conflict, which creates a tension known as the personalization-privacy paradox. In this study, we investigate consumers' attitudes towards PA and whether personalization gives rise to improved browsing experiences or even violates their privacy. A quantitative study was conducted, where the independent variable was Personalization and the dependent variables were Attractiveness, Annoyance, Invasiveness, and Trade-off. The results of the bivariate regression analyses showed statistically significant relationships between Personalization and all of the dependent variables. The results also illustrated that the majority of the respondents experienced PA ads as beneficial, but also invasive. These results are in line with the utility maximization theory, as PA ads were considered both beneficial and risky. Consumers would therefore have incentives to disclose their personal information as long as the perceived benefits outweigh the perceived risks generated by PA. We concluded that there are no clear, predetermined answers to what attitudes consumers have towards PA; these can vary from case to case, which is in line with both the privacy calculus theory and the utility maximization theory. Consumers perform a risk-benefit analysis in which perceived benefits exceeding perceived risks generate positive attitudes, and vice versa.
887
Language Modeling Using Image Representations of Natural Language. Cho, Seong Eun. 07 April 2023 (has links) (PDF)
This thesis presents the training of an end-to-end autoencoder model built on the transformer, with an encoder that encodes sentences into fixed-length latent vectors and a decoder that reconstructs the sentences from image representations. Encoding and decoding sentences to and from these image representations are central to the model design. This approach allows new sentences to be generated by traversing the latent Euclidean space, which makes vector arithmetic on sentences possible. Machines excel at dealing with concrete numbers and calculations but do not possess an innate infrastructure for understanding abstract concepts like natural language. For a machine to process language, scaffolding must be provided so that the abstract becomes concrete. The main objective of this research is to provide such scaffolding so that machines can process human language in a more intuitive manner.
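The abstract describes the model only at a high level; the following PyTorch sketch is a hypothetical architecture, with layer sizes, mean-pooling, and an MLP decoder chosen here purely for illustration rather than taken from the thesis. It shows how a transformer encoder can pool a sentence into one fixed-length latent vector and how a decoder can map that vector to an image-like grid, which is what makes vector arithmetic on sentences straightforward.

```python
import torch
import torch.nn as nn

class SentenceToImageAutoencoder(nn.Module):
    """Illustrative sketch: encode a token sequence into a fixed-length latent
    vector, then decode that vector into a 2-D "image" representation."""

    def __init__(self, vocab_size=10000, d_model=256, latent_dim=128, image_size=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.to_latent = nn.Linear(d_model, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512),
            nn.ReLU(),
            nn.Linear(512, image_size * image_size),   # flattened grayscale "image"
        )
        self.image_size = image_size

    def encode(self, tokens: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(self.embed(tokens))       # (batch, seq, d_model)
        return self.to_latent(hidden.mean(dim=1))       # mean-pool to a fixed length

    def decode(self, latent: torch.Tensor) -> torch.Tensor:
        flat = self.decoder(latent)
        return flat.view(-1, self.image_size, self.image_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(tokens))

# Because every sentence maps to one point in a Euclidean latent space,
# vector arithmetic such as z = z_a - z_b + z_c is well defined, and
# decode(z) yields the image representation of the resulting "sentence".
model = SentenceToImageAutoencoder()
tokens = torch.randint(0, 10000, (2, 12))               # two dummy 12-token sentences
z = model.encode(tokens)
print(z.shape, model.decode(z).shape)                   # [2, 128] and [2, 32, 32]
```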
888
Quasiconformal maps on a 2-step Carnot group. Gardiner, Christopher James. 17 July 2017 (has links)
No description available.
889
Ascophyllum nodosum – påverkan på det orala placket och dess proteaser / Ascophyllum nodosum – Effects on Oral Plaque and Its Proteases. Schwech, Nurda; Krupic, Sanja. January 2013 (has links)
Aims: The purpose of this study was to investigate whether the alga Ascophyllum nodosum (AN) exerts any effect on protease activity in plaque, and whether such an effect is present in the alga itself or whether it has to be ingested to exert a systemic effect. Increased protease activity has been associated with gingivitis and periodontitis. We expected a reduced protease activity, and thus a potentially reduced risk of gingivitis and periodontitis, after ingestion of AN for a month. Materials and methods: One in vitro trial with a pilot study and one in vivo trial were carried out. In the in vitro trial, pulverized and dissolved AN was used to make a solution that was tested for protease activity. In the in vivo study, 5 subjects aged 20 to 30 years participated. Plaque samples were taken before and after ingestion of the alga for 4 weeks. Subjects were instructed not to brush their teeth for 12 h before sampling. Results: When combining the results from the pilot and in vitro studies, no AN protease activity could be detected. Our in vivo results showed an increased protease activity in the plaque after a month of AN intake. Conclusion: This study indicates a tendency towards increased protease activity caused by Ascophyllum nodosum. However, the study did not examine which proteases were affected. Because of the complexity of the oral environment and the many different types of proteases, more studies need to be conducted to determine the exact effects of AN on the oral environment.
890
ADVANCING INTEGRAL NONLOCAL ELASTICITY VIA FRACTIONAL CALCULUS: THEORY, MODELING, AND APPLICATIONS. Wei Ding (18423237). 24 April 2024 (has links)
<p dir="ltr">The continuous advancements in material science and manufacturing engineering have revolutionized the material design and fabrication techniques therefore drastically accelerating the development of complex structured materials. These novel materials, such as micro/nano-structures, composites, porous media, and metamaterials, have found important applications in the most diverse fields including, but not limited to, micro/nano-electromechanical devices, aerospace structures, and even biological implants. Experimental and theoretical investigations have uncovered that as a result of structural and architectural complexity, many of the above-mentioned material classes exhibit non-negligible nonlocal effects (where the response of a point within the solid is affected by a collection of other distant points), that are distributed across dissimilar material scales.</p><p dir="ltr">The recognition that nonlocality can arise within various physical systems leads to a challenging scenario in solid mechanics, where the occurrence and interaction of nonlocal elastic effects need to be taken into account. Despite the rapidly growing popularity of nonlocal elasticity, existing modeling approaches primarily been concerned with the most simplified form of nonlocality (such as low-dimensional, isotropic, and homogeneous nonlocal problems), which are often inadequate to identify the nonlocal phenomena characterizing real-world problems. Further limitations of existing approaches also include the inability to achieve a mathematically well-posed theoretical and physically consistent framework for nonlocal elasticity, as well as the absence of numerical approaches to achieving efficient and accurate nonlocal simulations. </p><p dir="ltr">The above discussion identifies the significance of developing theoretical and numerical methodologies capable of capturing the effect of nonlocal elastic behavior. In order to address these technical limitations, this dissertation develops an advanced continuum mechanics-based approach to nonlocal elasticity by using fractional calculus - the calculus of integrals and derivatives of arbitrary real or even complex order. Owing to the differ-integral definition, fractional operators automatically possess unusual characteristics such as memory effects, nonlocality, and multiscale capabilities, that make fractional operators mathematically advantageous and also physically interpretable to develop advanced nonlocal elasticity theories. In an effort to leverage the unique nonlocal features and the mathematical properties of fractional operators, this dissertation develops a generalized theoretical framework for fractional-order nonlocal elasticity by implementing force-flux-based fractional-order nonlocal constitutive relations. In contrast to the class of existing nonlocal approaches, the proposed fractional-order approach exhibits significant modeling advantages in both mathematical and physical perspectives: on the one hand, the mathematical framework only involves nonlocal formulations in stress-strain constitutive relationships, hence allowing extensions (by incorporating advanced fractional operator definitions) to model more complex physical processes, such as, for example, anisotropic and heterogeneous nonlocal effects. On the other hand, the nonlocal effects characterized by force-flux fractional-order formulations can be physically interpreted as long-range elastic spring forces. 
These advantages grant the fractional-order nonlocal elasticity theory the ability not only to capture complex nonlocal effects, but more remarkably, to bridge gaps between mathematical formulations and nonlocal physics in real-world problems.</p><p>An efficient nonlocal multimesh finite element method is then developed to solve partial integro-differential governing equations in the fractional-order nonlocal elasticity to further enable nonlocal simulations as well as practical applications. The most remarkable consequence of this numerical method is the mesh-decoupling technique. By separating the numerical discretization and approximation between the weak-form integral and nonlocal integral, this approach surpasses the limitations of existing nonlocal algorithms and achieves both accurate and efficient finite element solutions. Several applications are conducted to verify the effectiveness of the proposed fractional-order nonlocal theory and the associated multimesh finite element method in simulating nonlocal problems. By considering problems with increasing complexity ranging from one-dimensional to three-dimensional problems, from isotropic to anisotropic problems, and from homogeneous to heterogeneous nonlocality, these applications have demonstrated the effectiveness and robustness of the theory and numerical approach, and further highlighted their potential to effectively model a wider range of nonlocal problems encountered in real-world applications.</p>
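As background on why fractional operators are intrinsically nonlocal, consider the left-sided Caputo derivative of order $\alpha \in (0,1)$, a textbook definition rather than a formula taken from this dissertation:

```latex
{}^{C}\!D^{\alpha}_{a^{+}} f(x)
  \;=\; \frac{1}{\Gamma(1-\alpha)}
        \int_{a}^{x} \frac{f'(s)}{(x-s)^{\alpha}}\, \mathrm{d}s,
\qquad 0 < \alpha < 1.
```

The value at $x$ depends on $f$ over the entire interval $[a, x]$ through a power-law kernel, which is precisely the kind of built-in, distance-weighted long-range coupling that the force-flux constitutive relations described above exploit.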