Welcome to the Association for Women in Mathematics (AWM) Chapter at the University at Buffalo. AWM is a non-profit organization founded in 1971 to encourage women and girls to have active careers in the mathematical sciences and to promote equal opportunity and the equal treatment of women and girls in the mathematical sciences.
Today, our AWM Chapter aims to support this purpose by providing encouragement and assistance to women at UB in developing their mathematical skills and achieving their professional goals, and by exciting younger generations about mathematics through community outreach programs.
Our Chapter organizes events such as social gatherings, presentations, and workshops in order to achieve these goals. In the fall of 2015, we founded the UB AWM Lecture Series to highlight the achievements of women in mathematics and to facilitate discussions about being a woman in the mathematical sciences.
We invite everyone who supports the purpose of AWM to attend our events and to become a member of the Chapter. We emphasize that the Chapter is open to men as well as women, and to both graduate and undergraduate students. Chapter members can (and should) obtain free AWM membership.
If you are interested in becoming a Chapter member or would like more information on upcoming events, please click the link below to join our mailing list, ListServ.
Faculty Advisors
Johanna Mangahas, PhD
Sarah F. Muldoon, PhD
UB AWM Chapter Officers
2018-2019
Alyson Bittner, President
Linda Alegria, Treasurer
Chinmayee Rane, Secretary
2017-2018
Kelly Dougan, President
Linda Alegria, Vice President
Tara Hudson, Treasurer
Megan Johnson, Secretary
Chinmayee Rane and Alejandra Garcia, Undergraduate Liaisons
2016-2017
Alyson Bittner, President
Ellyn Sanger, Vice President
Tara Hudson, Treasurer
Megan Johnson, Secretary
2015-2016
Alyson Bittner, President
Ellyn Sanger, Vice President
Tara Hudson, Treasurer
Elizabeth Reid, Secretary
The UB AWM Lecture Series is free and open to the public.
In this talk I will show how data scientists leverage the Facebook infrastructure to better understand people and their connections to each other and to the world. This understanding is key to providing a personalized and meaningful experience online. I will present how we develop large-scale inference and graph mining algorithms that scale to the Facebook graph.
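As a toy illustration of the kind of graph inference mentioned in the abstract, the sketch below propagates known user attributes across a small friendship graph. It is a generic label-propagation example in Python with made-up data, not Facebook's actual algorithms or infrastructure.

# Toy label propagation on a small friendship graph.
# Illustrative only: a generic algorithm with hypothetical data, not a production system.
from collections import Counter

# Undirected friendship graph as an adjacency list (hypothetical users).
graph = {
    "ana": ["bo", "cy"],
    "bo":  ["ana", "cy", "dee"],
    "cy":  ["ana", "bo"],
    "dee": ["bo", "eli"],
    "eli": ["dee"],
}

# Known attribute (e.g., favorite topic) for a few users; the rest are unknown.
seed_labels = {"ana": "math", "eli": "music"}

def propagate(graph, labels, n_iters=10):
    """Repeatedly assign each unlabeled node the majority label of its labeled neighbors."""
    labels = dict(labels)
    for _ in range(n_iters):
        updated = dict(labels)
        for node, neighbors in graph.items():
            if node in labels:  # already labeled (seed or previously inferred): keep fixed
                continue
            votes = Counter(labels[nbr] for nbr in neighbors if nbr in labels)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        labels = updated
    return labels

print(propagate(graph, seed_labels))
# bo and cy inherit "math" from ana; dee inherits "music" from eli.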
Bio: Aude is a data science manager in the Core Data Science team, where she leads the Graph & Identity Research team. Her interests lie in the development of novel large-scale inference algorithms and statistical methodologies, with a focus on large-scale graph inference, graph matching, and clustering that can be deployed to improve products across Facebook. Before joining Facebook, she studied for an M.Sc. in Applied Math at École Polytechnique and in Transportation at the École Nationale des Ponts et Chaussées. She obtained her PhD in Electrical Engineering and Computer Science (Controls & Machine Learning) at UC Berkeley.
February 13, 2017
About the Speaker: Dr. Amber Russell, Assistant Professor of Mathematics, Actuarial Science, and Statistics at Butler University, states: "My research area is algebraic representation theory and Lie theory, and I am particularly interested in the use of sheaf theory within this area. More specifically, much of my work relates to the Springer Correspondence and perverse sheaves on the nilpotent cone. My PhD advisor was Pramod Achar, and my current research mentor is William Graham. I also have an ongoing collaboration with Laura Rider."
Abstract: "The Springer Correspondence and Related Topics." In this talk, we will go over the definition of the Springer correspondence and see how perverse sheaves give a proof of it. We will also discuss how it relates to other topics in representation theory, and particularly emphasize its generalization due to Lusztig. I am currently working on a joint project with Martha Precup (Northwestern) and William Graham (UGA) that gives a new proof of the generalized Springer correspondence in the case of G = SL_n(C), and I will talk briefly about this new construction. The talk is intended to be accessible to an audience that has taken a graduate-level algebra course.
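For non-specialists, here is a schematic statement of the classical correspondence (a standard formulation, paraphrased rather than quoted from the talk): for a reductive group G with Weyl group W and nilpotent cone N, the irreducible representations of W embed into pairs consisting of a nilpotent orbit and an irreducible equivariant local system on it.

% Classical Springer correspondence (schematic; my paraphrase, not the speaker's wording):
\[
  \operatorname{Irr}(W) \;\hookrightarrow\;
  \bigl\{\, (\mathcal{O}, \mathcal{L}) :
    \mathcal{O} \subset \mathcal{N} \text{ a nilpotent orbit},\
    \mathcal{L} \text{ an irreducible } G\text{-equivariant local system on } \mathcal{O} \,\bigr\}.
\]
% Lusztig's generalized Springer correspondence accounts for all pairs (O, L) by partitioning
% them into series indexed by cuspidal data, each series in bijection with the irreducible
% representations of an associated relative Weyl group.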
The propagation of deep-water waves, such as ocean swell, is typically modeled by treating water as an inviscid fluid. Here we discuss the consequences of this inviscid approximation when comparing theoretical predictions with measured experimental and field data. For some situations, dissipation merely tweaks the comparison. For others, dissipation changes the outcome.
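To make the role of dissipation concrete, here is the classical linear-theory damping estimate (a textbook fact, not a formula quoted from the talk): a deep-water wave train with wavenumber k in a fluid of kinematic viscosity \nu has its amplitude damped as

\[
  a(t) = a_0 \, e^{-2 \nu k^{2} t},
\]

so the inviscid model (\nu = 0) predicts no decay at all; whether the omitted factor merely tweaks a comparison with data or changes its outcome depends on how large \nu k^{2} t becomes over the relevant propagation times.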
Abstract: A closed, orientable 3-manifold M always has a Heegaard splitting, that is, M^3 can be described as the union of two handlebodies glued together along their boundaries. Gay and Kirby extended this idea to 4-manifolds, showing that any closed orientable 4-manifold M^4 can be described as the union of three 4-dimensional handlebodies, glued together (carefully) along their boundaries. They called this a trisection of M^4. I’ll discuss their result, and describe a natural 4-manifold invariant, L(M^4), that arises from this decomposition. This is joint work with D. Gay and R. Kirby.
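For readers meeting these decompositions for the first time, here is their shape in symbols (standard definitions; the notation is mine, not the speaker's):

% Heegaard splitting of a closed, orientable 3-manifold:
\[
  M^3 = H_1 \cup_{\Sigma} H_2, \qquad
  \Sigma = \partial H_1 = \partial H_2 \ \text{a closed surface, with } H_1, H_2 \text{ handlebodies.}
\]
% Trisection of a closed, orientable 4-manifold (Gay-Kirby):
\[
  M^4 = X_1 \cup X_2 \cup X_3, \qquad X_i \cong \natural^{k_i}\,(S^1 \times B^3),
\]
% where each pairwise intersection X_i \cap X_j is a 3-dimensional handlebody and the
% triple intersection X_1 \cap X_2 \cap X_3 is a closed surface.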
As a mathematician at the National Security Agency, I get to do a lot of interesting math. I work in an office called Cryptography in Systems, which develops new cryptography. Although I can't share any of the details of my work, I'm instead going to give an overview of public-key cryptography and what it looks like in the past, present, and future. I will also talk a little about the NSA, what it's like to work there, and some job opportunities for mathematicians.
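As a concrete, deliberately insecure illustration of the public-key idea surveyed in the talk, here is textbook RSA with tiny numbers in Python. This is a generic toy example; nothing here reflects NSA systems or modern parameter choices.

# Textbook RSA with toy parameters: illustrative only, never use in practice.
p, q = 61, 53                  # two small primes (real RSA uses ~1024-bit primes or larger)
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)        # Euler's totient of n
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi (Python 3.8+)

def encrypt(m, e, n):
    return pow(m, e, n)        # c = m^e mod n, using only the public key (e, n)

def decrypt(c, d, n):
    return pow(c, d, n)        # m = c^d mod n, using the private exponent d

m = 42                         # a message encoded as an integer smaller than n
c = encrypt(m, e, n)
assert decrypt(c, d, n) == m   # anyone can encrypt; only the holder of d can decrypt
print(n, e, c)                 # 3233 17 2557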
A group of scientists make a new discovery. We can probably all agree that the world should hear about their work, and we can probably also agree that the scientists deserve some recognition. But how does that science make it from the lab bench to the Twitter feed? And what is the likelihood that the information you eventually consume will be intact, interesting, and accurate? As a mathematician with a background in radio, blogging, social media, and podcasting, I will attempt to answer these questions, and more. I will discuss the evolution of a science story from beginning to end, and make a case for quantitative literacy in journalism. And like much click-bait, this talk might not actually blow your mind…but then again, maybe it will. There’s really only one way to find out.
Graphical models have proven to be a valuable tool for connecting genotypes and phenotypes, and structural learning of phenotype-genotype networks has received considerable attention in the post-genome era. In recent years, a dozen different methods have emerged for network inference that leverage the natural variation arising in certain genetic populations. The structure of the network itself can be used to form hypotheses based on the inferred direct and indirect network relationships, but it represents a premature endpoint to the graphical analyses. In this work, we extend this endpoint: we examine the unexplored problem of perturbing a given network structure and quantifying the system-wide effects on the network in a node-wise manner. We leverage belief propagation methods in Conditional Gaussian Bayesian Networks (CG-BNs) to absorb and propagate phenotypic evidence through the network. We show that the modeling assumptions adopted for genotype-phenotype networks represent an important sub-class of CG-BNs, which possess properties that ensure exact inference in the propagation scheme. Applications to kidney and skin cancer expression Quantitative Trait Loci (eQTL) data from different Mus musculus populations are presented. We demonstrate how these predicted system-wide effects can be examined in connection with estimated class probabilities for covariates of interest, e.g., cancer status. Despite the uncertainty in the network structure, we demonstrate that the system-wide predictions are stable across an ensemble of highly likely networks. A software package, geneNetBP, which implements our approach, was developed in the R programming language.
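A minimal sketch of the "absorb evidence at a node and quantify system-wide effects" idea, assuming a purely linear-Gaussian toy network in Python: this is an illustration only, not the geneNetBP package (which is written in R) and not the authors' conditional Gaussian machinery.

# Toy three-node chain X -> Y -> Z with linear-Gaussian conditionals (made-up coefficients).
import numpy as np

rng = np.random.default_rng(0)

# Simulate the unperturbed (prior) network:
#   X ~ N(0, 1),  Y = 0.8 X + eps_Y,  Z = -0.5 Y + eps_Z
n_samples = 50_000
X = rng.normal(0.0, 1.0, n_samples)
Y = 0.8 * X + rng.normal(0.0, 0.5, n_samples)
Z = -0.5 * Y + rng.normal(0.0, 0.5, n_samples)

# Baseline (prior) means of the downstream nodes, estimated from the simulation.
baseline = {"Y": Y.mean(), "Z": Z.mean()}

# Perturb the network by clamping X to an observed value (evidence absorption).
# In a linear-Gaussian chain the exact posterior means follow the path coefficients:
#   E[Y | X = x] = 0.8 x  and  E[Z | X = x] = (-0.5)(0.8) x.
x_evidence = 2.0
posterior = {"Y": 0.8 * x_evidence, "Z": -0.5 * 0.8 * x_evidence}

# Node-wise "system-wide effect" of the perturbation: the shift in each node's mean.
for node in ("Y", "Z"):
    print(f"{node}: baseline mean {baseline[node]:+.3f} -> posterior mean {posterior[node]:+.3f}")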
When working with data, it is imperative to have a notion of distance in order to rigorously define concepts such as noise and approximation. The interleaving distance was first defined by Chazal et al. in the context of generalizing the bottleneck distance for persistence diagrams, a common tool from topological data analysis (TDA). It was then shown to fit naturally into the categorified framework for persistence modules provided by Bubenik and Scott. Category theory allows the idea to be moved fluidly and redefined to work on many other constructions. In particular, some standard metrics, such as the L∞ and Hausdorff distances, can also be viewed as special cases of the interleaving distance. In this talk, we will discuss the basic ideas behind the interleaving distance, its generalized definition in the language of category theory, and recent work extending the idea to give interleavings on Reeb graphs and merge trees. This work is joint with Vin de Silva, Amit Patel, and Anastasios Stefanou.
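For concreteness, here is the standard definition (not specific to this talk): persistence modules F and G are ε-interleaved if there are parameter-shifting maps compatible with their structure maps, and the interleaving distance is the infimum of all such ε.

% An epsilon-interleaving of persistence modules F, G : (R, <=) -> Vect consists of families of maps
\[
  \varphi_t : F(t) \to G(t+\varepsilon), \qquad
  \psi_t : G(t) \to F(t+\varepsilon),
\]
% natural in t and compatible with the structure maps, so that the composites
% F(t) -> G(t+\varepsilon) -> F(t+2\varepsilon) and G(t) -> F(t+\varepsilon) -> G(t+2\varepsilon)
% agree with the internal maps of F and G. The interleaving distance is then
\[
  d_I(F, G) = \inf \{\, \varepsilon \ge 0 : F \text{ and } G \text{ are } \varepsilon\text{-interleaved} \,\}.
\]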
In the past decade, the field of neuroscience has benefited from myriad experimental advances, allowing researchers to explore the brain across multiple spatial and temporal scales with ever-increasing resolution and sensitivity. Along with this wave of new information has come a need to develop novel methods and techniques for visualizing, quantifying, and comparing data across scales and modalities. Network theory – the art of mapping physical systems to mathematical graphs – provides an attractive methodology for studying the brain, but it also requires the development of new techniques to (i) identify network nodes and connections from large data sets and (ii) measure and interpret subtle differences in the topological features of brain networks. In this talk, I’ll discuss my journey from math to physics to neuroscience (and back to math!) research and how quantitative researchers are helping to drive new areas of brain science. Finally, I will highlight some of the different types of data sets studied in network neuroscience and describe specific projects my group is working on to extract and measure network topology from diverse modalities, ranging from single-neuron imaging to whole-brain fMRI datasets.
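As a small, hedged example of the "map the data to a graph, then measure its topology" pipeline described above, one might threshold a correlation matrix of regional time series and compute simple network statistics. The Python sketch below uses synthetic data and a generic functional-connectivity recipe, not the speaker's methods or datasets.

# Build a toy functional brain network from synthetic "regional time series"
# and compute a few topological summaries. Illustrative only, with made-up data.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)

# Hypothetical data: 20 brain regions, 300 time points of fake BOLD-like signal.
n_regions, n_timepoints = 20, 300
timeseries = rng.normal(size=(n_regions, n_timepoints))

# Functional connectivity: pairwise Pearson correlation between regions.
corr = np.corrcoef(timeseries)
np.fill_diagonal(corr, 0.0)              # ignore self-connections

# Threshold the correlation matrix to obtain an unweighted graph
# (the threshold choice is itself a methodological decision).
threshold = 0.1
adjacency = (np.abs(corr) > threshold).astype(int)
G = nx.from_numpy_array(adjacency)

# A few standard topological measures one might compare across groups or scales.
print("density:            ", nx.density(G))
print("mean clustering:    ", nx.average_clustering(G))
print("degree of region 0: ", G.degree[0])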