I am a PhD Candidate in Computer Science at Rensselaer Polytechnic Institute, where I am also a Research Assistant in the Rensselaer AI and Reasoning (RAIR) Lab. I am simultaneously pursuing an MS in Cognitive Science, and I graduated from Worcester Polytechnic Institute in 2018 with a double Bachelor's degree in Computer Science and Mathematical Sciences.
This summer I returned to Amazon as an Applied Scientist Intern, this time on the Amazon Lex team, where I'm researching methods for understanding entailment in indirect speech acts. More specifically, the goal of the project is to enable dialog agents to identify implicit intents in user utterances.
I believe it is essential that AI agents be able to provide justification and verification for their actions and decisions. That is, AI agents should be able to justify their decisions and verify that each decision is correct, follows a moral or legal code, etc., most likely by providing a proof or argument.
To that end, my research interests include automated reasoning, hybrid AI, and artificial general intelligence (AGI). My current research focuses on qualitative uncertainty (as contrasted with quantitative uncertainty). That is, I'm interested in creating AI agents that can express and reason with uncertain beliefs in a cognitively plausible way (e.g., "I believe it is beyond reasonable doubt that formula phi holds."), and that can update their beliefs when they receive new information, even when it is inconsistent with prior beliefs.
Contact & More Information
For more information about projects I’ve worked on, visit the links at the top of this page. The best way to contact me is via email.