Below you will find a schedule of events as well as session details available electronically. The full program is available in PDF format. All listed times are UTC-05:00 Eastern Time (US & Canada).
Glaucoma is the second leading cause of blindness worldwide and is characterized by retinal ganglion cell death. Previous studies have linked impaired blood flow and increased venous oxygen saturation to glaucoma, but there is an ongoing controversy regarding whether these factors are primary or secondary to disease. Mathematical modeling has emerged as a useful tool to help decipher the role of hemodynamics in glaucoma. In this study, a theoretical model of the human retina is extrapolated from a previous mouse model based on confocal microscopy images. Oximetry data from the human retina are used to convert the murine vascular network to a human network by adapting: (i) the number of main arterial branches and the angles between them, (ii) vessel diameters, and (iii) vessel lengths. In the human model, oxygen levels in the retinal arterioles and surrounding tissue are calculated using a Green's function approach. The substantial increase in the standard deviation of the predicted partial pressure of oxygen with increasing oxygen demand indicates the importance of simulating flow and oxygen within a heterogeneous network. The heterogeneous arrangement of arterioles in this model accounts for the complex geometry of blood vessels and the diffusion of oxygen from multiple sources into one tissue point, and it is currently being linked to a compartmental model of the capillaries and venules to establish a complete model of the retinal microcirculation. Such a model has the potential to impact diagnosis and treatment strategies for glaucoma patients.
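To illustrate the Green's function idea used for the arterioles, the sketch below superposes point-source contributions to the oxygen partial pressure at a tissue point. The kernel is the standard free-space Green's function for steady three-dimensional diffusion; every number (positions, source strengths, the lumped diffusivity) is an illustrative placeholder, not a value from the study.

    import numpy as np

    D_ALPHA = 6e-10  # lumped diffusivity * solubility (illustrative value and units)

    def greens(r):
        """Free-space Green's function for steady 3-D diffusion from a point source."""
        return 1.0 / (4.0 * np.pi * D_ALPHA * r)

    # Hypothetical vessel source positions (cm) and oxygen efflux strengths
    sources = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])
    strengths = np.array([2e-12, 1.5e-12, 1.8e-12])

    def tissue_po2(point):
        """PO2 at a tissue point: superposition of all vessel source contributions."""
        r = np.linalg.norm(sources - point, axis=1)
        return np.sum(strengths * greens(r))

    print(tissue_po2(np.array([0.005, 0.005, 0.002])))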
This work introduces a mathematical hybrid model of the human retinal microcirculation that combines a heterogeneous vascular description of the arterioles with a compartmental representation of the capillaries and venules to predict retinal blood flow and tissue oxygenation. The spatially heterogeneous model of arterioles in C++ is linked to the compartmental model in MATLAB to generate a hybrid model that is established by connecting every terminal arteriole to a series of compartments representing capillaries, small venules, and large venules. A Green's function method is used to model oxygen transport in the arterioles, and a Krogh cylinder model is used in the capillaries and venules. A metabolic wall signal is calculated from blood and tissue partial pressure of oxygen and is conducted upstream to communicate the metabolic status of the retina to the arterioles. The signal generated by the hybrid model is compared with the signal generated by an entirely compartmental model to demonstrate the role of spatial heterogeneity in the system. A more than two-fold range of wall signal values is communicated upstream and depends on vascular path lengths and partial pressure of oxygen. This model provides the geometric and hemodynamic framework necessary to predict blood flow regulation in the human retina and will ultimately be used for early detection and treatment of ischemic and metabolic disorders of the eye.
In an expansion of a completed SIR mathematical model research project, students Cody Dosch, Heather Kwolek, and Jack DeGroot, with faculty advisor Dr. Craig Johnson, create an SVEIR mathematical model for the spread of COVID-19. The SIR model is a simple mathematical model of epidemics that utilizes three compartments: Susceptible, Infected, and Removed. The SVEIR mathematical model is an advanced version of the SIR model with the additional compartments Vaccinated and Exposed. The SVEIR model will provide a more precise outlook on how COVID-19 will spread from person to person in Pennsylvania because of its additional parameters. These parameters are daily rates pertaining to vaccine coverage across Pennsylvania, the rate at which vaccines are administered, the rate at which symptoms appear in infected individuals, and the recovery and death rates of COVID-19. We created the SVEIR model by expanding our original system of three first-order differential equations to include the additional compartments, yielding a system of five first-order differential equations. We wrote and utilized a C# program that uses the Runge-Kutta numerical method to forecast the number of COVID-19 cases over a given time period t within the population of the state of Pennsylvania. We will utilize our SVEIR model to predict the number of COVID-19 cases that will occur within the months of May, June, and July.
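A minimal Python sketch of this structure (the authors' implementation is in C#, and the rates below are illustrative placeholders, not the fitted Pennsylvania parameters): a classical fourth-order Runge-Kutta step advances the five coupled compartment equations one day at a time.

    import numpy as np

    beta, nu, sigma, gamma, mu = 0.3, 0.01, 0.2, 0.1, 0.001  # illustrative rates

    def sveir(t, y):
        S, V, E, I, R = y
        N = y.sum()
        return np.array([
            -beta * S * I / N - nu * S,      # susceptible
            nu * S,                          # vaccinated
            beta * S * I / N - sigma * E,    # exposed
            sigma * E - gamma * I - mu * I,  # infected
            gamma * I + mu * I,              # removed (recovered + deceased)
        ])

    def rk4_step(f, t, y, h):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

    y = np.array([12.8e6, 0.0, 1000.0, 500.0, 0.0])  # rough PA-sized population
    for day in range(90):  # forecast roughly May through July
        y = rk4_step(sveir, day, y, 1.0)
    print(y)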
This work is focused on determining whether econometric factors, such as unemployment, law enforcement expenditure, and per capita income, have an effect on crime rates. For our data analysis, we used the crime2 data set from the Wooldridge package in RStudio. The data are a special case of time series data, known as two-period panel data, since they contain observations from 46 cities across two periods, 1982 and 1987. Classical regression models aren't valid for analyzing two-period panel data because the independence assumption isn't satisfied. Linear regression and logistic regression models with the first-difference method were employed to determine the effect of econometric factors on crime rate. The results of the data analyses showed that unemployment has the strongest effect on crime rates in all linear regression models that were performed. Additionally, the logistic regression models demonstrated that, as the unemployment rate increases, the probability of a rise in crime rate increases during the two periods of time.
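A sketch of the first-difference method for two-period panel data: subtract each city's 1982 values from its 1987 values, which removes time-invariant city effects, then regress the differenced crime rate on the differenced factors. The file and column names below are hypothetical stand-ins for the crime2 variables.

    import pandas as pd
    import statsmodels.api as sm

    panel = pd.read_csv("crime2.csv")  # hypothetical export of the crime2 data

    # One row per city: 1987 value minus 1982 value for each variable
    diffs = (panel[panel.year == 1987].set_index("city")
             - panel[panel.year == 1982].set_index("city"))

    # Regress the change in crime rate on changes in the econometric factors
    X = sm.add_constant(diffs[["unemployment", "law_expenditure", "income"]])
    print(sm.OLS(diffs["crime_rate"], X).fit().summary())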
It is well known that complete graphs on at least 5 vertices cannot be embedded in the plane without some edge crossings; however, in general, the minimum possible number of edge crossings over all plane embeddings of K_n is not known. Guy's conjecture states that the minimum number of crossings of K_n embedded in the plane is given by the following formula: Z(n) = (1/4)*floor(n/2)*floor((n-1)/2)*floor((n-2)/2)*floor((n-3)/2). Ábrego et al. demonstrated that K_n can be embedded using a 2-page book drawing in a way that achieves Guy's conjectured minimum. In this talk we introduce these ideas together with a novel embedding scheme called the swirl embedding. We determine the crossing number of K_n under the swirl embedding and explore techniques to reduce this number.
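For reference, Guy's conjectured values are easy to compute directly from the floor-function formula above; the expected output is noted in the comment.

    def Z(n: int) -> int:
        """Guy's conjectured minimum crossing number of K_n."""
        return (n // 2) * ((n - 1) // 2) * ((n - 2) // 2) * ((n - 3) // 2) // 4

    print([Z(n) for n in range(5, 13)])  # [1, 3, 9, 18, 36, 60, 100, 150]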
The quantum Fourier transform (QFT) is at the basis of finding the period of a sequence of natural numbers (otherwise known as order finding), on which Shor's algorithm relies. The QFT is also a necessary step in the quantum phase estimation task. We present the mathematics behind the implementation of a QFT procedure in an accessible manner, based on a numerical example that uses only a 3-qubit input. This is part of our effort to present particular core ideas of quantum computing theory in a style comprehensible at the undergraduate level.
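A compact way to see the 3-qubit QFT concretely is as its 8x8 matrix. The sketch below (an illustration, not the speakers' material) checks unitarity and shows how the transform exposes the period of a simple sequence, the mechanism order finding relies on.

    import numpy as np

    N = 8  # 2^3 basis states for a 3-qubit register
    omega = np.exp(2j * np.pi / N)
    F = np.array([[omega ** (j * k) for k in range(N)] for j in range(N)]) / np.sqrt(N)

    print(np.allclose(F @ F.conj().T, np.eye(N)))  # True: F is unitary

    # A period-4 input (support on indices 0 and 4) transforms to amplitude
    # concentrated on multiples of N/4 = 2, i.e., indices 0, 2, 4, 6.
    x = np.zeros(N, dtype=complex)
    x[0] = x[4] = 1 / np.sqrt(2)
    print(np.round(np.abs(F @ x) ** 2, 3))  # [0.25, 0, 0.25, 0, 0.25, 0, 0.25, 0]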
Mild traumatic brain injury (MTBI) can have a negative impact on war veterans' health and well-being. Unfortunately, there is not yet a quantitative method to confirm the qualitative interviews used to diagnose patients who are suffering from MTBI. Utilizing brain graphs generated from qEEG scans of 145 veterans, we used measures from the graph theory literature to quantify different aspects of these brain graphs. The ultimate goal is to develop a machine learning or other predictive model that will allow health professionals to determine whether someone has MTBI while taking their PTSD levels into account. Our goal for this research project is to find suitable graph measures to serve as inputs for such a model.
This paper proposes an algebraic method of evaluating certain indeterminate limits. The method does not involve the direct use of calculus; it is, however, built on calculus. Unlike methods such as L'Hospital's rule, the proposed methodology does not require the functions involved to be put in fractional form.
Polycyclic codes are a generalization of cyclic and constacyclic codes. Even though they have been known since 1972 and received some attention more recently, there have not been many studies on polycyclic codes. This paper presents an in-depth investigation of polycyclic codes associated with trinomials. Our results include a number of facts about trinomials, some properties of polycyclic codes, and many new quantum codes derived from polycyclic codes. We also state several conjectures about polynomials and polycyclic codes. Hence, we show useful features of polycyclic codes and present some open problems related to them.
Time has a fundamentally different character in quantum mechanics and relativity. In quantum mechanics, time unfolds in a fixed order, while in general relativity the time order is affected by the distribution of matter. We first consider two massive quantum particles entangled with each other in their spatial superposition, and we take into account the relativistic behavior of massive particles affecting time order. We then propose a thought experiment that leads to entanglement between two distinct temporal orders of time-like events.
Prime numbers are among the most fundamental objects in mathematics. However, it is exceedingly difficult to completely understand their structure. Mankind has made great progress in the understanding of prime numbers since antiquity; a notable example is the proof of the infinitude of primes given by Euclid around 300 BC. However, it is almost embarrassing to admit that we have not yet found answers to many questions in the theory of prime numbers, some of which are very fundamental in nature and appear to be very easy. One question on which progress has been made recently asks: how many prime numbers are there below a fixed number n? The answer is roughly n/log n, a result proved independently in 1896 by Jacques Hadamard and Charles Jean de la Vallée Poussin.
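The result mentioned here is the Prime Number Theorem; in modern notation, with \(\pi(n)\) denoting the number of primes at most \(n\), it reads

\[
\lim_{n \to \infty} \frac{\pi(n)}{\,n/\log n\,} = 1 .
\]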
The role of possession in soccer is an essential part of the game that can show the strength of individual teams. Two strategies are prevalent: Tiki-Taka, in which teams pass the ball frequently to dominate play, and counter-attack, in which teams use momentum swings or mistakes by the other team to score. In this talk, we develop a model to predict the score for each matchup in the English Premier League for the 2019/2020 season. We find the critical possession percentage for each of the 190 matchups, that is, the possession percentage at which our model predicts a tie. We use this critical possession percentage to discuss interesting matchups as case studies and as an avenue to discuss each team's optimal choice of strategy.
Let us denote the set of positive integers less than n and relatively prime to n as R(n). If R(n) can be partitioned into two sets that have equal sums, n is called super totient. We will discuss exceptional totient numbers, which are defined similarly, and provide a classification for them.
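A brute-force sketch of the defining property: an equal-sum partition of R(n) exists exactly when the total is even and some subset reaches half of it, which a standard subset-sum dynamic program can check for small n.

    from math import gcd

    def is_super_totient(n: int) -> bool:
        R = [k for k in range(1, n) if gcd(k, n) == 1]
        total = sum(R)
        if total % 2:
            return False
        target = total // 2
        reachable = {0}  # subset sums of R seen so far, capped at target
        for k in R:
            reachable |= {s + k for s in reachable if s + k <= target}
        return target in reachable

    print([n for n in range(2, 30) if is_super_totient(n)])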
While many view mathematics as simply a subject one takes in school, the truth is that mathematics is the foundation for many aspects of our lives. A diverse range of professional fields, including defence, aerospace, manufacturing, automotive, and even athletic equipment development, relies on mathematics to drive innovation, gather data, model performance, and improve technology. Maple is math software that combines the world's most powerful math engine with an interface that makes it extremely easy to analyze, explore, visualize, and solve mathematical problems. With Maple, you aren't forced to choose between mathematical power and usability, making it the ideal tool for both education and research. Whether you are using Maple as a programming tool, to learn mathematical concepts, or to visualize behavior, this talk will demonstrate key features of Maple with practical applications that can support and elevate your research.
Toys have inspired a lot of interesting mathematics. The Spirograph™ helps children create lovely curves by rolling a small circle around the inside or the outside of a larger circle. These curves are called hypotrochoids and epitrochoids and are special cases of mathematical curves called roulettes. A roulette is created by following a point attached to one curve as that curve "rolls" along another curve. Another children's toy, the Tangle™, inspired some students and me to investigate roulettes that we get by rolling a circle around the inside of a "tangle curve," which is made up of quarter circles. The resulting roulettes we named "tangloids." In this talk, we will look at many pretty pictures and animations of these curves and discuss some of their interesting properties. As a bonus, I will discuss the nature of generalization, which is very important in mathematics.
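For reference, the standard parametric equations of a hypotrochoid, for a circle of radius \(r\) rolling inside a fixed circle of radius \(R\) with the traced point at distance \(d\) from the rolling circle's center, are

\[
x(t) = (R - r)\cos t + d\cos\!\Big(\frac{R - r}{r}\,t\Big), \qquad
y(t) = (R - r)\sin t - d\sin\!\Big(\frac{R - r}{r}\,t\Big),
\]

and the epitrochoid is obtained analogously by rolling along the outside of the fixed circle.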
Big data is everywhere, and many techniques are being developed across math and computer science to analyze and study data sets. For example, one fundamental concept is how to compare such datasets; this requires new types of distance measurements to study how similar they are. In a related vein, techniques are needed to look for underlying patterns or shapes in the data. Many of these methods pull from fundamental tools in topology and geometry, which have a rich history of studying such problems. In this talk, we will introduce and discuss the growing field of topological data analysis, which leverages key concepts from topology and geometry to simplify, compare, and analyze a wide range of data sets.
I am conducting research to design a visualization in the area of game theory. This research team includes faculty and colleagues from five universities. We are analyzing the results of over 100 years of computation on Purdue's clusters, resulting in more than 100 petabytes of data. The goal is to understand the underlying structure of the mathematics for a large game theory problem. We have built a tool where we can pick any three points in the parameter space of the problem and have our visualization show us the structure of the game in that region. At any (x,y,z)-coordinate, we can use an interactive "hover" feature that reveals the underlying mathematical structure to the user. We are using D3.js, a JavaScript library, to visualize the data. The visualization approach is crucial because the mathematical structure is recursive. We routinely use the visualization tool to zoom into the space, revealing the fine-grained details of the attributes of the space, in a way that would be impossible without this tool. This research is related to a foundational game theory problem that has been open since the 1960s. By conducting this research, we will further the understanding of this foundational game theory problem.
The Lean proof assistant is a tool for verifying and generating proofs using a computer. In the past few years, mathematicians have started using Lean to formalize important results: Gusakov, Mehta, and Miller formalized the proof of Hall's Marriage Theorem, and Kontorovich and Gomes formalized the statement of the Riemann Hypothesis. In this talk, we will introduce and demonstrate interactive theorem proving. We will share our motivation for learning about proof assistants, and we will discuss our process of formalizing results about primitive Pythagorean triples in Lean.
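As a flavor of what such a formalization looks like, here is a minimal Lean 4 sketch; the definition name and the example are illustrative, not taken from the speakers' development.

    -- Define what it means for three naturals to form a Pythagorean triple
    def PythagoreanTriple (a b c : Nat) : Prop :=
      a ^ 2 + b ^ 2 = c ^ 2

    -- `decide` evaluates the decidable claim 9 + 16 = 25
    example : PythagoreanTriple 3 4 5 := by decide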
The objective of my research was to find best-fit models of the infrared spectrum data of the dust disks orbiting the white dwarfs G29-38, GD 56, and GD 362. My analysis of the infrared spectrum of the dust disk orbiting G29-38 served as the foundation of my project because, as the brightest dust disk discovered thus far, it has the most abundant data available. I utilized a radiative transfer modeling program (RADMC-3D) to create the best-fitting model to date based on the G29-38 data. Model parameters that I considered included the inner radius of the disk, the outer radius of the disk, the scale height of the disk, and the size of the dust grains. I calculated chi-squared values and identified the parameters that produced the best model. In extending my research, I used CASSIS (Combined Atlas of Sources with Spitzer IRS Spectra) to extract data for two other white dwarfs with verified dust disks, GD 56 and GD 362. By applying my prior model-building methodology with statistical averages of the defined parameters, I obtained best-fit models for GD 56 and GD 362. The parameter adjustments required to achieve the best-fit models reveal insights into the material composition and age of the respective dust disks and the corresponding white dwarfs. Through modeling the dust disks of white dwarfs, we learn more about the formation and ultimate state of planets, which serves to further the fields of astronomy, physical analysis, and statistical analysis.
In 1966, Wassily Leontief published his input-output model of the economy. In this model, Leontief observed the flow of different sectors of the economy and used linear algebra to derive demand, amongst other macroeconomic measurements. In this talk, we briefly describe the input-output model and attempt to adapt it to the Real Estate Market.
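The core computation of the input-output model is a single linear solve: if the consumption matrix A records how much of each sector's output is consumed per unit of output of every sector, and d is the final (external) demand, then total production x satisfies x = Ax + d. A toy two-sector example in Python, with made-up numbers:

    import numpy as np

    A = np.array([[0.2, 0.3],
                  [0.4, 0.1]])       # hypothetical two-sector consumption matrix
    d = np.array([100.0, 50.0])      # final demand for each sector's output

    x = np.linalg.solve(np.eye(2) - A, d)  # x = (I - A)^(-1) d
    print(x)  # total production needed from each sector to satisfy demand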
The Simplex Algorithm is regarded as one of the most important algorithms ever developed. It has found numerous applications in business, transportation, and research. In implementations of the simplex algorithm, different pivot rules are used to determine the pivot for the next iteration of the algorithm. No single pivot rule outperforms all other pivot rules on all examples. Even after the algorithm has been known for over seventy years, theoretical studies have yet to determine which pivot rules are optimal for particular problems. This empirical study examines the performance of three classic pivot rules (Dantzig's rule, the greatest increase rule, and Bland's rule) on polytopes with the unit sphere tangent to all facets and polytopes with a deformed unit sphere tangent to all facets, in three and higher dimensions.
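To make two of these rules concrete, the sketch below shows how, for a minimization tableau, Dantzig's rule and Bland's rule pick the entering variable from a row of reduced costs; the greatest increase rule is omitted because it needs the full tableau, weighing each candidate column by the objective improvement its ratio test yields.

    import numpy as np

    reduced_costs = np.array([3.0, -2.0, -5.0, 0.0, -5.0])

    # Dantzig's rule: enter the variable with the most negative reduced cost
    dantzig = int(np.argmin(reduced_costs))                        # index 2

    # Bland's rule: enter the lowest-index variable with a negative reduced cost
    bland = next(j for j, c in enumerate(reduced_costs) if c < 0)  # index 1

    print(dantzig, bland)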
The Prisoner's Dilemma is a two-player game in which each player (prisoner) can either cooperate or defect. In the Iterated Prisoner's Dilemma (IPD), two players repeatedly play the Prisoner's Dilemma game against each other. In our research, we investigate a class of strategies for the IPD called memory one strategies, that is, probabilistic strategies that depend only on the actions (cooperate or defect) taken by each of the two players during the previous round of the game. Such strategies are characterized by a vector p = (p_CC, p_CD, p_DC, p_DD), where p_CC is the probability of cooperating if both players cooperated in the previous round, and p_CD, p_DC, p_DD are defined similarly. Given that a player plays a memory one strategy p, we are interested in the probability that the player wins against a player employing a random memory one strategy q, selected uniformly among all vectors in [0,1]^4. In particular, we characterize the strategies p that win, with probability 1, against a random strategy q.
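A Monte Carlo sketch of the central quantity, with "wins" read here as "earns the higher long-run average payoff" under the standard payoffs T=5, R=3, P=1, S=0 (the authors' precise notion of winning may differ): the play of two memory one strategies is a four-state Markov chain over the outcomes CC, CD, DC, DD, so one can compute its stationary distribution, average the payoffs, and repeat over random opponents q.

    import numpy as np

    PAYOFF_1 = np.array([3.0, 0.0, 5.0, 1.0])  # player 1's payoff in CC, CD, DC, DD
    PAYOFF_2 = np.array([3.0, 5.0, 0.0, 1.0])  # player 2's payoff in the same states
    SWAP = [0, 2, 1, 3]                        # state CD looks like DC to player 2

    def long_run_payoffs(p, q):
        q2 = q[SWAP]  # player 2 reads each state with the roles reversed
        M = np.empty((4, 4))
        for s in range(4):
            c1, c2 = p[s], q2[s]
            M[s] = [c1 * c2, c1 * (1 - c2), (1 - c1) * c2, (1 - c1) * (1 - c2)]
        w, v = np.linalg.eig(M.T)  # stationary distribution: eigenvalue-1 eigenvector
        pi = np.real(v[:, np.argmin(np.abs(w - 1))])
        pi /= pi.sum()
        return pi @ PAYOFF_1, pi @ PAYOFF_2

    rng = np.random.default_rng(0)
    p = np.array([0.9, 0.1, 0.8, 0.2])  # an arbitrary mixed memory one strategy
    trials, wins = 10_000, 0
    for _ in range(trials):
        a, b = long_run_payoffs(p, rng.uniform(size=4))
        wins += a > b
    print(wins / trials)  # estimated win probability against a random opponent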
The Harmonic Measure Distribution Function (h-function) of a domain with respect to a fixed basepoint measures the probability that a particle, exhibiting Brownian motion starting from the fixed basepoint, will exit the domain within a radius r of the fixed basepoint. The explicit h-functions for circle domains, a certain class of domains, cannot be found through analytic methods, but the graphs of h-functions for almost all domains, including circle domains, can be approximated through computer simulation. Using computer simulation, this project sought to develop a method of finding, for any given h-function, h(r), a circle domain whose h-function approximates h(r).
If you pick n random numbers in [0,1], what is the probability that their sum also falls into the interval [0,1]? The answer turns out to be 1/n!, which can be seen using symmetry arguments and multidimensional integrals. In our project, we consider generalizations of this question. In particular, we consider the probability that, given n random numbers in [0,1], all sums (or at least one sum) of k of these numbers fall into the interval [0,1]. Questions of this type can be interpreted geometrically as a variation of the broken stick problem that arises in ecology and in terms of a classical problem of Archimedes on the volume of the intersection of cylinders. We give explicit formulas for these probabilities and for their generating functions. In the special case k = n-1, these probabilities can be expressed in terms of Stirling numbers and also in terms of harmonic numbers.
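A quick Monte Carlo check of the 1/n! fact for the basic question:

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    for n in (2, 3, 4):
        # Fraction of trials in which n uniform [0,1] draws sum to at most 1
        hits = (rng.uniform(size=(200_000, n)).sum(axis=1) <= 1).mean()
        print(n, round(hits, 4), 1 / factorial(n))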
An (n_3) configuration is a collection of n lines on the plane with n triple points, where three lines intersect, and with three triple points on each line. Pappus' Theorem describes a (9_3) configuration where the ninth line follows directly from the previous lines; Desargues' Theorem describes a similar (10_3) configuration. We show that no such (11_3) or (12_3) configurations exist among the 260 distinct configurations, via a computer program that calculates the number of free parameters of these configurations. This is joint work with Moshe Cohen of SUNY New Paltz.
In this article, the growth and incubation times for internal cracks due to hydrogen embrittlement are examined. Specifically, these times are modeled in two separate phases, and the two models are coupled using a Padé approximation. The first phase is a long-time approximation, in which the concentration of atomic hydrogen is not dependent on time and the crack is quasi-stationary. The second phase is a short-time approximation, in which the concentration of atomic hydrogen is dependent on time and the crack growth is rapid. The approach is similar for both phases. The critical release energy, atomic hydrogen flux, and the volume of the crack as a function of time are incorporated into the Ideal Gas Law. In both phases, the flux is modeled using a boundary value problem, but for the second phase a non-stationary diffusion problem is solved. In both phases, the model yields an integral equation. The incubation time was found by assuming the crack radius a(t) = a(0) and solving the integral equation. The incubation time models were then coupled using a Padé approximation. In the first phase, the growth time was found by converting the integral equation to a differential equation and solving it; this also yields a(t) and da/dt, and in the first phase da/dt was constant.
A famous example of a nontransitivity paradox is given by the Efron dice, which are four six-sided dice with face values given as follows: A = {0,0,4,4,4,4}, B = {1,1,1,5,5,5}, C = {2,2,2,2,6,6}, D = {3,3,3,3,3,3}. It is easy to check that, with probability 2/3 each, B beats A, C beats B, D beats C, and A beats D. Thus, the dice A, B, C, D form a nontransitive cycle: the probability of each die losing to the next die in the cycle is always greater than 1/2. Motivated by the Efron dice, we consider the question of which n-tuples of probabilities can arise from similar cyclic relations between n independent random variables. More formally, we call an n-tuple (x_1,...,x_n) with all coordinates in [0,1] cyclic if there exist independent random variables U_1,...,U_n such that P(U_{i+1} > U_i) = x_i for i = 1,...,n-1 and P(U_1 > U_n) = x_n. We call the tuple (x_1,...,x_n) nontransitive if it is cyclic and in addition satisfies x_i > 1/2 for all i. The Efron dice construction shows that the 4-tuple (2/3, 2/3, 2/3, 2/3) is cyclic and nontransitive. We characterize the set of cyclic tuples algebraically, and we investigate the probability p_n that a random n-tuple (x_1,...,x_n) is cyclic. We determine p_n exactly for n = 3, and examine the asymptotic behavior as n goes to infinity.
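The opening claim is easy to verify exhaustively; a short sketch counting winning face pairs for each consecutive pair of dice:

    from itertools import product
    from fractions import Fraction

    A = [0, 0, 4, 4, 4, 4]
    B = [1, 1, 1, 5, 5, 5]
    C = [2, 2, 2, 2, 6, 6]
    D = [3, 3, 3, 3, 3, 3]

    def beats(X, Y):
        """Probability that a roll of X exceeds a roll of Y (no ties possible here)."""
        wins = sum(x > y for x, y in product(X, Y))
        return Fraction(wins, len(X) * len(Y))

    print(beats(B, A), beats(C, B), beats(D, C), beats(A, D))  # 2/3 each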
Liquid crystals are a state of matter having characteristics between those of a liquid and a solid. The molecules in a liquid crystal are free to move about, but they must have roughly the same orientation as neighboring molecules. Hence, a liquid crystal has partial orientational order. Cholesteric liquid crystals are simply liquid crystals that are organized in layers with a helical structure. Liquid crystals have applications in many fields. For example, they can be used in electronic displays and optical imaging. We formulated a model based on continuum mechanics to represent the microscopic order of the cholesteric liquid crystal molecules, using a tensor field as the order parameter. To find equilibrium configurations, we minimized a Landau-de Gennes free energy functional using finite element discretization and gradient flow combined with Newton's method. Future results will yield a tensor field that minimizes the free energy functional, which will allow us to reconstruct 3D images of cholesteric liquid crystals in their stable equilibrium configurations.
Epidemic simulations usually track the temporal evolution of a virtual city or community of agents in terms of contracting infection, recovering asymptomatically, or getting hospitalized. However, the computational cost of doing so can make such models infeasible to run in normal settings. We use certain Gompertz functions to reduce the computational cost, making use of the continuity of this class of functions to track how the viral load of each individual agent grows through short- or long-lived physical proximity without getting into overly detailed bookkeeping. Moreover, the unique characteristics of Gompertz functions give us significant leeway in controlling the internal working of the parameters. Our simulation results are consistent with the publicly available hospitalization and ICU patient data from three distinct regions of varying sizes. Our model can also predict the trend in epidemic spread and hospitalization from a set of simple parameters and could be potentially useful in predicting disease evolution based on available data and observations about public behavior.
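For reference, a Gompertz function has the general double-exponential form (parameter names here are generic, not the simulation's):

\[
G(t) = a\, e^{-b e^{-ct}}, \qquad a, b, c > 0,
\]

which rises smoothly from near zero to the plateau \(a\), making it a convenient continuous stand-in for an agent's growing viral load.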
One of the most important and challenging problems in coding theory is to determine the optimal values of the parameters of a linear code and to explicitly construct codes with optimal parameters, or as close to the optimal values as possible. The class of quasi-twisted (QT) codes has been very promising in this regard. Over the past few decades, various search algorithms to construct QT codes with better parameters have been employed. Most of these algorithms (such as ASR) start by joining constacyclic codes of smaller lengths to obtain QT codes of longer lengths. There is also an algorithm that works in the opposite way, constructing shorter QT codes from long constacyclic codes. We modified and generalized this algorithm and obtained new linear codes via its implementation. We also observe that the new algorithm is related to the ASR algorithm.
The objective of this project is to analyze how we can limit the best player of the opposing team in a soccer match without requiring two or more players to defend them. For example, it is not viable for two or more players to defend Lionel Messi, since he is very skilled and can easily beat two players. We use a Markov chain analysis to model the likelihood of a passing chain going from the goalie to a shot on goal through each player, and we describe one way we could decrease Messi's shots on goal solely by decreasing the proportion of successful passes from two other players.
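A sketch of the absorbing Markov chain computation behind this kind of analysis (all transition probabilities below are made up for illustration): with transient player states collected in Q and absorbing outcomes in R, the matrix B = (I - Q)^(-1) R gives, for each starting player, the probability of ending in each outcome.

    import numpy as np

    # Transient states: goalie, defender, midfielder, forward (pass probabilities)
    Q = np.array([[0.0, 0.7, 0.2, 0.0],
                  [0.1, 0.0, 0.6, 0.1],
                  [0.0, 0.2, 0.0, 0.6],
                  [0.0, 0.0, 0.3, 0.0]])
    # Absorbing states: shot on goal, possession lost
    R = np.array([[0.00, 0.10],
                  [0.00, 0.20],
                  [0.05, 0.15],
                  [0.50, 0.20]])

    B = np.linalg.solve(np.eye(4) - Q, R)  # B = (I - Q)^(-1) R
    print(B[0])  # P(shot on goal), P(turnover) for a chain starting at the goalie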
In 1735, Leonhard Euler shocked the mathematical community by computing the sum of the series of reciprocals of the squares of the natural numbers. Fascinatingly enough, the value of this sum involves the extensively studied number pi. The series considered by Euler has connections to both Riemann's zeta function and the dilogarithm function. Both of these special functions, along with the closely related trilogarithm, frequently appear in the structure of generating functions involving harmonic numbers. In this talk, an improved representation of the dilogarithm and trilogarithm in terms of their real and imaginary parts will be discussed, along with some special values of these functions.
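For reference, Euler's evaluation and the two special functions mentioned are

\[
\sum_{n=1}^{\infty} \frac{1}{n^2} = \zeta(2) = \frac{\pi^2}{6}, \qquad
\operatorname{Li}_2(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^2}, \qquad
\operatorname{Li}_3(z) = \sum_{n=1}^{\infty} \frac{z^n}{n^3},
\]

so Euler's sum is precisely the special value \(\operatorname{Li}_2(1) = \zeta(2)\).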
A graph G is k-mixing if you can get between any two k-vertex-colorings by a sequence of single color switches maintaining a proper k-vertex-coloring at every step. This problem has been previously studied. Here, we introduce the total mixing problem, extending the idea to total coloring. A k-total-coloring of a graph G is an assignment of the colors 1, 2, ..., k to every vertex and every edge such that no two incident elements are assigned the same color. We say a graph G is k-total-mixing if we can get from one arbitrary k-total-coloring of G to any other by a sequence of single color switches, maintaining a proper k-total-coloring of G at every step. We completely answered this question for trees, showing that a tree T is k-total-mixing if and only if k is at least the maximum degree plus 2. We also proved several results for cycles, nearly answering this question for that family of graphs.
Probability theory is a branch of measure theory. However, we usually simplify the logical steps and neglect measure-theoretic concepts when we apply probability theory. The Law of Large Numbers is one of the essential building blocks of both probability theory and statistics. This result ensures that a random process can converge to a long-term stable outcome. The Weak Law of Large Numbers (WLLN) and the Strong Law of Large Numbers (SLLN) are two different versions of this idea. This presentation will discuss and compare the WLLN and SLLN from a measure theory perspective.
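For i.i.d. random variables \(X_1, X_2, \ldots\) with common mean \(\mu\) and sample mean \(\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i\), the two laws assert different modes of convergence:

\[
\text{WLLN:}\ \ \lim_{n\to\infty} P\big(\lvert \bar{X}_n - \mu\rvert > \varepsilon\big) = 0 \ \text{ for every } \varepsilon > 0,
\qquad
\text{SLLN:}\ \ P\Big(\lim_{n\to\infty} \bar{X}_n = \mu\Big) = 1,
\]

and almost sure convergence (SLLN) implies convergence in probability (WLLN), but not conversely.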
The spring-mass system is an invaluable model with unmatched versatility for studying wave-like physical phenomena and material deformation. We consider linear spring-mass systems and the use of experimental data to locate and characterize defects ("error" masses) somewhere along their length. By taking the Laplace transform of the first body's trajectory, the system eigenvalues are recoverable if damping is negligibly weak. Encoded within the trace/determinant eigenvalue relations are the masses of the defects; a minimization procedure with the recovered masses can then reveal their locations. For up to two defects, this scheme is reasonably successful.
As young mathematicians with various marginalized identities, we think a lot about the career-building opportunities we missed out on because we weren't in the know. This is often because we didn't come from families with ties to academia, didn't attend the schools at which those opportunities are promoted, don't fit the mold of a "typical" mathematician in the eyes of those who should be looking out for us, or just slipped through the cracks of the educational pipeline in another way. But we know more now than when we started out, and we decided we could use what we've learned to help those who come after us. This is how the Online Undergraduate Resource Fair for the Advancement in Academia of Marginalized Mathematicians (OURFA2M2) was born. OURFA2M2 is an online conference started by five undergraduate and early graduate student organizers to meet the needs of undergraduates interested in a future in mathematics research. In this talk, we'll discuss how we built the conference to meet specific community needs we saw, what we learned in the process, and our advice for aspiring student leaders on identifying the needs of their community and building infrastructure that meets those needs.
In recent years, graphene has gained significant popularity as a building material and energy storage medium with a wide variety of applications. One of graphene's most unique properties is its conductivity, which is enhanced by massless fermions that enable lossless electron transfer across a graphene sheet. Previous researchers [Novoselov (2011)] have conjectured a potential relation between a special type of spectral touching point, Dirac conical points, and the unique properties of graphene, although this has yet to be formally proven. Further research by T. Weyand (2014) and R. Martin (2017) found that variations of graphene also have these touching points. I expand upon this idea by searching for other materials possessing spectral touching points, which may indicate the presence of properties similar to those of graphene. I verify the existence of these touching points by modeling a material as a two-dimensional infinite periodic graph, the spectrum of which can be found using Floquet-Bloch theory. I find the fundamental domain of this graph, find the corresponding magnetic flux Schrödinger operator, and then take the union over all possible values of magnetic flux. This allows me to graph the infinite spectrum and search for touching points, which indicate repeated eigenvalues. I then verify touching points by calculating the eigenvalues and eigenvectors at probable locations to show that there exist linearly independent eigenvectors for the same eigenvalue. In my presentation, I will explain how, using the methods described above, I found examples of these materials and proved the existence of various types of touching points within them under different symmetry conditions.