Brave New World

How Rollins researchers are embracing evolving technologies, from artificial intelligence to data apps, to combat biases, improve health equity, and transform public health.  

Artificial intelligence (AI) and technology are rapidly transforming how we create and process data. AI tools such as ChatGPT, a program that simulates conversation with people, are changing the landscape across disciplines and industries. In the realm of public health, AI is providing new ways to remove biases, explore personalized medicine and targeted interventions, and dig deeper into causality. The pace of change requires researchers to move quickly yet thoughtfully. While AI can open doors to understanding bias in at-risk communities, it can also create significant ethical dilemmas.


"When we talk about AI and how it's used, especially in public health, there are tremendous positives to it, but there are also lots of low-hanging dangers that can come from it," says Robert Krafty, PhD, chair of the Department of Biostatistics and Bioinformatics. Among these dangers: not accounting for racial bias and blindly trusting patterns in the data.

Training is key to avoiding such pitfalls, he notes. Rollins prepares students to work in every facet of public health research and practice. New graduates, regardless of their focus, will need to "have a fundamental understanding of artificial intelligence and technology so that when they read the literature or think about how best to use evidence-based research to help their populations, they're doing it from an informed perspective," adds Krafty.

That's where Emory University's AI.Humanity Initiative comes in. The initiative examines the impact of AI on society as it integrates more deeply into our lives, looking at fairness, bias, ethical issues, unintended consequences, and more. It aims to inspire greater collaboration among departments and to serve as a recruiting draw for AI experts, increasing learning and innovation.

As part of this initiative, Emory recently announced the Center for Artificial Intelligence Learning to promote AI literacy across campus. Beginning this fall, the center will offer year-round courses, workshops, and speaker visits to cover general AI literacy, data visualization, neural networks, cloud computing, and more. 

Lance Waller, PhD, the center's co-leader and professor of biostatistics and bioinformatics, believes the center will push forward Emory's AI.Humanity Initiative to answer the question, "How does AI integrate into life or humanity?"

"It will help us to not only invest in the development of new algorithms, but to also understand how they can be applied in a range of settings, such as research, teaching, and service,” says Waller. “It will foster collaborations among researchers in the sciences, social sciences, and the humanities, including the Emory Center for Ethics, as we consider the impact of AI tools on humanity."

"When we talk about AI and how it's used, especially in public health, there are tremendous positives to it, but there are also lots of low-hanging dangers that can come from it," says Robert Krafty, PhD, chair of he Department of Biostatistics and Bioinformatics. Among these dangers: not accounting for racial bias and blindly trusting patterns in the data.

a photo of a white man standing in an empty classroom.

Robert Krafty, PhD, chair of the Department of Biostatistics and Bioinformatics

Robert Krafty, PhD, chair of the Department of Biostatistics and Bioinformatics

Training is key to avoiding such pitfalls, he notes. Rollins prepares students to work in every facet of public health research and practice. New graduates, regardless of their focus, will need to "have a fundamental understanding of artificial intelligence and technology so that when they read the literature or think about how best to use evidence-based research to help their populations, they're doing it from an informed perspective,” adds Krafty. 

That's where Emory University's AI.Humanity Initiative comes in. This initiative focuses on the impact of AI on society as it integrates deeper into our lives to examine fairness, bias, ethical issues, consequences, and more. The initiative aims to inspire greater collaboration between departments and function as a recruiting tool for AI experts to increase learning and innovation.

As part of this initiative, Emory recently announced the Center for Artificial Intelligence Learning to promote AI literacy across campus. Beginning this fall, the center will offer year-round courses, workshops, and speaker visits to cover general AI literacy, data visualization, neural networks, cloud computing, and more. 

a photo of an older white man standing in an empty hallway.

Lance Waller, PhD, professor of biostatistics and bioinformatics 

Lance Waller, PhD, professor of biostatistics and bioinformatics 

Lance Waller, PhD, the center's co-leader and professor of biostatistics and bioinformatics, believes the center will push forward Emory's AI.Humanity Initiative to answer the question, "How does AI integrate into life or humanity?"

"It will help us to not only invest in the development of new algorithms, but to also understand how they can be applied in a range of settings, such as research, teaching, and service,” says Waller. “It will foster collaborations among researchers in the sciences, social sciences, and the humanities, including the Emory Center for Ethics, as we consider the impact of AI tools on humanity."

an abstract technology pattern background image

Looking deeper into randomized trials

With AI in its infancy, researchers are proceeding cautiously to understand the data AI-generated modeling produces. A central question is how AI will be used. Will researchers use it to find the patterns they want to see and act on those? Or will they be more thoughtful, using AI to address harder questions about underlying causes and structures while avoiding inherent biases?

Hui Shao, PhD, associate professor of global health, is applying causal AI and machine learning methods in his precision medicine and public health research related to diabetes and multimorbidity.  

Shao points out that randomized clinical trials are designed to answer specific questions, but the data they generate hold value beyond those questions and remain largely unexplored. Shao is revisiting those trials and applying machine learning algorithms to provide more insight into how treatment effects vary across population subgroups with diabetes and other health conditions.

"It's very easy for researchers to ignore some important subgroups that are not included as part of the protocol or part of traditional groups, like what if people with higher A1C (an average blood sugar level test to determine diabetes) respond better to the treatment? What if people with early signs of kidney failure respond worse to the treatment or are proven to have a higher risk for adverse events? Those modulating indicators are often ignored from the original protocol and not included as part of the trial standardized analysis. But those signals can be picked up by the AI," says Shao. 

By acquiring data from causal AI algorithms, Shao aims to develop more precise treatment plans and strategies that maximize health outcomes and avoid unnecessary adverse effects. The trials he revisits include prevention trials for diabetes and hypertension, lifestyle interventions, and drug trials. With a solid foundation in medicine and pharmacoepidemiology, Shao has a keen eye for the supplementary insights that can be extracted from various clinical factors.

Shao shares that kidney function, blood pressure, heart rate, BMI, and glucose levels can often serve as strong indicators of treatment response. "Those are often ignored in trials, but based on these further AI insights, we can tailor how the traditional practice in the clinic works to maximize health benefits and minimize harm."

Shao and his team presented findings at the American Diabetes Association on the type 2 diabetes drug albiglutide. By reanalyzing randomized clinical trials and processing the data with AI, they found that liver function is closely tied to whether a patient will respond to the treatment.

"Patients should have an unimpaired liver function to benefit from Albiglutide,” says Shao. “If a person has early signs of non-alcoholic fatty liver disease, Albiglutide will not benefit them in terms of cardiovascular disease prevention."

Shao shares that using AI helps uncover patterns in large amounts of data, especially in those randomized clinical trials. Still, he's careful when reviewing all results. "AI-generated patterns should be cross-referenced with clinical content to determine if they have a solid foundation in pathology or physiology, as this would enhance the scientific validity of the findings," says Shao. "Nonetheless, there may be instances where the results, although statistically significant, do not provide meaningful or interpretable insights, which could potentially compromise the study's credibility."

It's a fine line to walk to make sure the information is being used and interpreted correctly, but when patterns are validated, they can save lives and improve the outcomes of clinical practice. For example, Shao's team put its AI data work into a real-world scenario, providing prescription suggestions that leverage its clinical drug trial work in collaboration with Tulane University. Each time they found that treatment effects varied across patient subgroups, they fed the data to their AI algorithm, with the goal that it would later serve as a reference for clinicians when prescribing treatments.

One of those algorithms is the BRAVO (Building, Relating, Assessing, and Validating Outcomes) diabetes model, a machine-learning-based microsimulation model that accurately predicts diabetes comorbidities and complications. After collecting a large amount of population data, the model simulates the progression of diabetes with a set of equations over time.
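
The published BRAVO equations are not reproduced here, but the general shape of a risk-equation microsimulation can be sketched as follows. Every coefficient and variable below is invented for illustration; the point is only to show how such a model steps a simulated population through time, accumulating complications year by year.

```python
# Toy microsimulation in the general style of risk-equation diabetes
# models. The hazards and coefficients are made up; they are NOT the
# published BRAVO equations.
import numpy as np

rng = np.random.default_rng(1)
n_patients, n_years = 10_000, 10

a1c = rng.normal(8.0, 1.0, n_patients)     # baseline A1C (%)
age = rng.normal(60.0, 8.0, n_patients)    # baseline age (years)
event = np.zeros(n_patients, dtype=bool)   # e.g., first cardiovascular event

def annual_event_prob(a1c, age):
    """Toy risk equation: event probability rises with A1C and age."""
    logit = -4.0 + 0.35 * (a1c - 7.0) + 0.05 * (age - 60.0)
    return 1.0 / (1.0 + np.exp(-logit))

for year in range(n_years):
    p = annual_event_prob(a1c, age)
    event |= ~event & (rng.random(n_patients) < p)  # new events this year
    age += 1.0
    a1c += rng.normal(0.05, 0.10, n_patients)       # gradual drift in A1C

print(f"simulated 10-year event rate: {event.mean():.1%}")
```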

"While I'm very excited about what the capabilities of AI has, when it's attached to the practice, when the patient is on the other side of the table, we need to proceed very cautiously and make sure we don't over-interpret what the AI can bring us," he says.


Understanding biases in AI data—and how to eliminate them

Razieh Nabi, PhD, Rollins Assistant Professor of biostatistics and bioinformatics, is developing causal methods to support better data-driven treatment and policy decisions. Focusing on causal inference, Nabi develops novel methodologies for understanding cause-and-effect relationships when a randomized controlled trial isn't feasible due to cost or other constraints.

Drawing insights from machine learning, statistics, and AI, Nabi thinks about counterfactuals (what if the patient had taken a different treatment, or started treatment earlier?) to draw causal conclusions from observational data. She works to understand the consequences of hypothetical interventions in order to quantify the causal effect of the exposure under study on the outcome of interest.
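
To make that concrete, here is a minimal sketch of why naive observational comparisons mislead and how adjustment helps. It simulates a setting where sicker patients are treated more often, so the raw treated-versus-untreated difference makes a beneficial drug look harmful; a simple standardization (g-formula) estimate that adjusts for the confounder recovers the true effect. The variables and data are hypothetical, not drawn from Nabi's research.

```python
# Sketch: confounding in observational data, and adjustment via the
# g-formula. Simulated data with hypothetical variable names.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n = 20_000

severity = rng.normal(0, 1, n)                      # confounder: sicker...
p_treat = 1 / (1 + np.exp(-1.5 * severity))
t = (rng.random(n) < p_treat).astype(float)         # ...get treated more
y = 2.0 * t - 3.0 * severity + rng.normal(0, 1, n)  # true effect = +2.0

# Naive comparison is biased because treatment tracks severity
naive = y[t == 1].mean() - y[t == 0].mean()

# G-formula: model the outcome given treatment and confounder, then
# average predictions with everyone set to treated vs. untreated
model = LinearRegression().fit(np.column_stack([t, severity]), y)
y1 = model.predict(np.column_stack([np.ones(n), severity]))
y0 = model.predict(np.column_stack([np.zeros(n), severity]))
adjusted = (y1 - y0).mean()

print(f"naive difference:  {naive:+.2f}  (looks harmful)")
print(f"adjusted estimate: {adjusted:+.2f}  (close to the true +2.00)")
```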

Nabi also uses causal inference to better understand different sources of bias in data, such as confounding bias, bias due to informative censoring and missing data, and discriminatory biases reflected in data due to historical patterns of injustice and inequality in our society.  

Decision-making is what differentiates causal inference from predictive modeling, says Nabi.

As an example, consider ChatGPT, which tries to learn patterns in data and mimic them. It relies on an enormous amount of text data, synthesizing it to generate new paragraphs based on correlations that may be spurious, so the output is not necessarily factually correct.

"The problem with mimicking patterns in data is that they are not robust enough to tolerate perturbations (variations) or external interventions, and this is because such patterns do not account for confounding factors," says Nabi. 

Machine learning and AI operate under the assumption that new inputs will resemble the training data and that the environment will stay the same. When this assumption is broken, these predictive models make mistakes.

For example, in medical imaging, AI often makes better predictions than humans when identifying whether a skin lesion is cancerous. But once a disruption is introduced—e.g., the imaging angle changes, or the background is darker—the AI makes mistakes because it cannot adapt quickly.
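
That failure mode is easy to reproduce in a few lines. The toy example below stands in for the imaging case: a classifier is trained where a "background" feature happens to agree with the label 95 percent of the time, then evaluated after that spurious relationship shifts. Everything is simulated; no real imaging model is involved.

```python
# Sketch of a spurious-correlation failure: a model leans on a
# background feature that correlates with the label in training but
# shifts at deployment. Fully simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

def make_data(n, spurious_agreement):
    """A weak true signal plus a background feature that matches the
    label with the given probability."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0, 1.5, n)                   # weak, real
    match = rng.random(n) < spurious_agreement
    background = np.where(match, y, 1 - y) + rng.normal(0, 0.1, n)
    return np.column_stack([signal, background]), y

# Training: background agrees with the label 95% of the time
X_tr, y_tr = make_data(5000, 0.95)
clf = LogisticRegression().fit(X_tr, y_tr)

# Deployment: the background shifts and agrees only 50% of the time
X_te, y_te = make_data(5000, 0.50)
print(f"training accuracy:     {clf.score(X_tr, y_tr):.2f}")
print(f"shifted-test accuracy: {clf.score(X_te, y_te):.2f}")  # drops sharply
```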

In her research, Nabi is working on predictors that are not just correlated with a health outcome but actually cause it. In essence, she's working to tease apart spurious correlations from causation and to find the most plausible way to quantify how much the outcome would change if these features changed, in a cause-and-effect sense.

"This is really the core objective of causal inference—trying to understand the consequences of interventions or think about these counterfactual scenarios—and trying to quantify the relation between the outcomes and the treatments that we're interested in," says Nabi. "That's very different from what AI predictive models do, which focus on finding patterns. Both are useful, but we have to be careful about how we are going to use them and in what settings."

How, then, can AI be used to benefit public health? At the intersection of AI and causal inference lies the work of identifying the underlying factors that bias our findings, and of resolving them. Such steps are particularly important in health care, especially for issues affecting under-represented minorities.

"Despite the illusion of objectivity in algorithms, they rely on humans in every step of their development, from data collection to how methods are being deployed and used in practice as policies," says Nabi. "The patterns that algorithms see will be the patterns of discrimination we've seen in society, so AI is going to learn that and reintroduce it."

To avoid this, Nabi strives to ensure that these algorithms respect fairness norms. She first defines what it means for an algorithm to be fair with respect to a sensitive attribute and an outcome of interest, then constrains the algorithm to respect that definition. "AI has a potential, and we've seen the potential, but we just have to be careful in how we unlock it," she says.
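
What "respecting fairness norms" looks like in code depends on the method, but a common first step is auditing a model's predictions across groups defined by a sensitive attribute. The snippet below computes two standard gaps, demographic parity and equal opportunity, on made-up predictions. Nabi's work goes much further, constraining algorithms during training, which this sketch omits.

```python
# Sketch: auditing a classifier for group fairness with two common
# metrics. Groups, outcomes, and predictions are all made up.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
group = rng.integers(0, 2, n)    # sensitive attribute (0 or 1)
y_true = rng.integers(0, 2, n)   # true outcomes
# Hypothetical biased predictor: more positive predictions for group 1
p = 0.35 + 0.20 * group + 0.25 * y_true
y_pred = (rng.random(n) < p).astype(int)

g0, g1 = group == 0, group == 1

# Demographic parity gap: difference in positive-prediction rates
dp_gap = y_pred[g1].mean() - y_pred[g0].mean()

# Equal-opportunity gap: difference in true-positive rates
tpr0 = y_pred[g0 & (y_true == 1)].mean()
tpr1 = y_pred[g1 & (y_true == 1)].mean()

print(f"demographic parity gap: {dp_gap:+.2f}")
print(f"equal opportunity gap:  {tpr1 - tpr0:+.2f}")
```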


Integrating with existing data apps for quicker adoption and scalability

In recent years, research has focused on building apps from scratch to improve scheduling with providers, support screenings, or create portals for accessing and ordering prevention goods, like condoms. But Aaron Siegler, PhD, associate professor of epidemiology, shares that these projects often aren't viable past the research stage or applicable in the real world.

"If you demonstrate in a clinical trial that an app works, what's next? Are you going to start your own business? Maybe you hope an app owner reads your research. It's hard," he says. "If we build technology outside of existing systems, it takes a lot of work to bring it into those systems later."

Siegler is trying a new approach by leading a technology-driven clinical trial to prevent HIV in China. Funded by the National Institutes of Health, the trial is testing the use of a popular existing app to scale up pre-exposure prophylaxis (PrEP) intervention services. PrEP is a medication used to prevent HIV; when taken as prescribed, it reduces the risk of contracting HIV from sex by about 99 percent.

Siegler is piloting the intervention in partnership with the developers of Blued, a gay dating and social networking app with more than 12 million monthly active users in China. Through the pilot, the app offers a health portal for users to access PrEP intervention services and order preventive care items such as HIV tests, condoms, and lubricants.

The primary health outcome in this case is not general app use but the more specific use of telemedicine visits. Users can see a clinician virtually, go to a local lab for HIV testing, and receive PrEP prescriptions by mail.

Siegler notes that with a user base and business model in place, there's a more direct pathway to scalability. Trust and access are two main factors in this clinical trial as well.

"People have been using an app for years that they trust. Part of the concept for this trial is is working with something that already exists and reaching people where they are," he says. 

Users already spend hours on their smartphones every day. Siegler's approach gives them access to effective prevention interventions in a space where they're already comfortable and present. Early clinical trial data show promising usage and early uptake of PrEP services.

"If we can develop the right system for more collaboration between companies with health apps and researchers with the ability to build interventions and test them in clinical trials to understand the exact impact on public health—that's huge," says Siegler.

Krafty shares that AI and technology, like apps, can help us in two ways. At the macro, population level, they can show where we need to shift resources and initiatives as we think about overlooked populations. And in the same vein as personalized medicine, AI can aid the spread of personalized public health. "We can tailor things to individuals and specific subpopulations. So, we can think about the good of the public, but by tackling each individual," explains Krafty.

In his research, Krafty uses AI to reach at-risk populations before they have clinical problems, specifically adults ages 65 and older who are recently bereaved. With the help of wearable devices, he collects data on their moods and daily activities to monitor for major depressive episodes. Accounting for both observational and missing data, Krafty uses AI to process the data, spot patterns that signal risk of a major health event, and guide timely treatment.
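
The details of Krafty's models are beyond this article, but the basic shape of such a wearable-monitoring pipeline can be sketched: summarize a daily signal while explicitly handling days the device was not worn, then flag sustained departures from a personal baseline. The step-count data, thresholds, and decision rule below are all hypothetical.

```python
# Sketch of a wearable-monitoring workflow with missing data: daily step
# counts, a gap-tolerant rolling average, and a simple decline flag.
# Simulated data; not Krafty's actual models.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)
days = pd.date_range("2024-01-01", periods=90, freq="D")

# Simulated daily steps with a gradual decline and wear-time gaps
steps = rng.normal(6000, 800, len(days)) - np.linspace(0, 3500, len(days))
steps[rng.random(len(days)) < 0.15] = np.nan  # device not worn some days
s = pd.Series(steps, index=days)

# 14-day rolling mean that tolerates gaps (needs >= 10 observed days)
rolling = s.rolling(14, min_periods=10).mean()

# Flag days where activity falls well below the personal baseline
baseline = s.iloc[:30].mean()                 # first month as baseline
flag = rolling < 0.6 * baseline

print(f"personal baseline: {baseline:,.0f} steps/day")
print(f"first flagged day: {flag.idxmax().date() if flag.any() else None}")
```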

On a larger scale, artificial intelligence is saving lives and improving health care for millions of people. "AI is making hospitals run more efficiently," says Nabi. "It's helping clinicians make decisions with more confidence by providing them with powerful tools to automate certain tasks and support and inform them. And it's helping patients by personalizing treatments and improving delivery of care."

When Emory’s Center for Artificial Intelligence Learning opens this fall, it will aid in these powerful endeavors by advancing the use of AI to solve problems ethically and drive research to improve and protect the health of patients and different populations, locally and globally.

Story by Muriel Vega
Designed by Linda Dobson
Animation by Charlie Layton
Photography by Erik Meadows


Want to know more? Please visit

Rollins Magazine
Emory News Center
Emory University
