Jargon is any word or phrase that loses or changes its meaning when you use it with people who aren’t in your field. It can allow people with shared expertise to communicate complex ideas and concepts precisely and efficiently. But it becomes an obstacle when you want to discuss the work with someone who does not share that expertise, and often, that is the very goal of our science communication.
A few of us CMU grad students got together and tried to simplify our research descriptions. To make them as simple as possible, we followed the rules of the XKCD Thing Explainer: use only the thousand most common English words. As the saying goes, “If you can’t explain it simply, you don’t understand it well enough.” While we lose some detail and nuance in the simplified versions, the process forces us to distill our research topics and to consider new analogies for them.
Here are our research descriptions after, and before, simplification.
Machines that make things by putting down layer after layer are showing up in more and more places. These machines can make things with new and interesting shapes that are still very strong. Within ten years, such machines will change how many things are made.
3D-printed parts are increasingly showing up across many industries. These new printers can create complex structures with varied strength and material properties. Within ten years, 3D printers will significantly alter the processes by which products are designed and fabricated.
My work is about people’s guesses of what kind of cars other people will buy. I found that breaking the cars into kinds in a way that is too simple can lead to bad guesses, which can then lead to leaders making bad choices.
My research is about computer models that predict what kind of car people buy, and the mathematical assumptions used to represent the wide range of types of cars available on the market. I have found that oversimplifying the model can lead to variation in results, which can then result in poor policy decisions.
I’m teaching computers how to find and understand the language that we use to talk about causes and what is caused. This language can be things like “because,” “open the way for,” and “so __ that __” (like “so bad that I left”). Computers have a hard time recognizing language like this because there are so many different forms of it. So I helped the computer to learn how we say things like this by looking at many cases that humans have given the right answers for.
I’m building software systems that can find and interpret language about cause and effect. The kinds of language we use to describe causality range from expressions like “because” to “open the way for” or “so __ that __” (e.g., “so offensive that I left”), so it’s difficult to get a computer to recognize them and find all the causes and effects. I’ve developed systems that learn, from many manually annotated texts, what patterns indicate these cause-and-effect relationships.
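To make the idea concrete, here is a minimal sketch of the pattern-matching side of this task. The real systems learn such patterns from annotated data; this toy version hard-codes a few connective patterns, and every name and pattern in it is illustrative, not the actual system.

```python
import re

# Hypothetical mini-lexicon of causal connective patterns (illustrative only;
# a real system would learn many such patterns from annotated text).
CAUSAL_PATTERNS = [
    re.compile(r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)", re.I),
    re.compile(r"(?P<cause>.+?)\s+opened? the way for\s+(?P<effect>.+)", re.I),
    re.compile(r"so\s+(?P<cause>\w+)\s+that\s+(?P<effect>.+)", re.I),
]

def find_cause_effect(sentence):
    """Return a (cause, effect) pair if any known pattern matches, else None."""
    for pat in CAUSAL_PATTERNS:
        m = pat.search(sentence)
        if m:
            return m.group("cause").strip(" ."), m.group("effect").strip(" .")
    return None

print(find_cause_effect("I left because the talk was so long."))
```

The difficulty the description points at is visible even here: a handful of regexes covers only a few surface forms, which is why learning patterns from many annotated examples is necessary.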
People’s brains are made up of many tiny blocks that talk to each other through even more paths. Not only that, there are many kinds of blocks in the brain, which makes it hard to hear and understand what they are saying to each other. So, I am making new ways, using computers, to understand what the blocks are saying and to listen to more blocks at the same time.
The human brain is made up of billions of neurons with even more connections between them. Additionally, there are thousands of different types of neurons, making measuring and studying brain activity a difficult task. To that end, I am applying novel signal processing and machine learning algorithms to simultaneously study large populations of neurons.
The problem of teaching a machine to move around by itself in a place it has never seen before, and which it can see small parts of, is hard. Other machines and people in the area can make it hard for the machine to know where it is and what to do next. So, we have its computer plan where to go in the world it does not know well, using information and guesses about the other machines and people that it can see.
The problem of moving a robot autonomously in an unstructured environment with partially observable variables is difficult, because the movement of agents such as other robots and humans in the environment can interfere with the robot’s ability to plan, localize, and act. To solve this, we need to plan the robot’s actions in a probabilistic world, where the robot’s model dynamically changes based on the actions of other agents.
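The planning formulation above can be caricatured in a few lines: pick the next step that trades off progress toward the goal against the expected risk of colliding with another agent whose next position is only known as a probability distribution. All the numbers, weights, and function names below are hypothetical, chosen only to illustrate the idea of planning against a probabilistic model of other agents.

```python
# Toy sketch: choose the robot's next grid step to minimize a cost combining
# expected collision risk (probabilistic agent model) and distance to goal.
def best_step(robot_pos, goal, agent_next_dist):
    """agent_next_dist maps possible agent cells -> probability of the agent
    being there at the next time step."""
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1), (0, 0)]  # (0, 0) = wait in place

    def cost(pos):
        collision_p = agent_next_dist.get(pos, 0.0)           # expected risk
        dist = abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])  # progress term
        return 10.0 * collision_p + dist                      # hypothetical weight

    candidates = [(robot_pos[0] + dx, robot_pos[1] + dy) for dx, dy in moves]
    return min(candidates, key=cost)
```

With no agent nearby the robot steps toward the goal; when the agent is likely to occupy the next cell, waiting becomes the cheaper action, which is the kind of behavior a probabilistic planner produces.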
I write computer language that makes machines walk around outside. The machine then knows, without being able to see, to go around things that it hits.
I develop compliant control algorithms for legged robots in unstructured environments. These algorithms take as inputs sensor measurements from the joints, and comply with the environment when the robot collides with obstacles, so that the shape of the robot autonomously adapts and the robot can navigate blindly.
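The core idea of complying on contact can be sketched as a single joint update rule: track a commanded angle in free motion, but yield when the sensed joint torque indicates a collision. Every name, gain, and threshold here is hypothetical, not the author’s actual controller.

```python
# Illustrative compliant joint update (all gains/thresholds hypothetical).
def compliant_joint_update(target_angle, measured_angle, measured_torque,
                           stiffness=1.0, torque_limit=2.0):
    """Move toward the target angle, but back off when the sensed joint
    torque suggests the leg has collided with an obstacle."""
    if abs(measured_torque) > torque_limit:
        # Comply: yield in the direction that reduces the contact torque,
        # letting the robot's shape adapt to the obstacle.
        return measured_angle - 0.1 * measured_torque
    # Otherwise track the commanded target with simple proportional control.
    return measured_angle + stiffness * (target_angle - measured_angle)
```

Because the decision uses only joint torque sensing, not vision, a rule like this is what lets the robot “blindly” conform to obstacles it runs into.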
Giving new machine body parts, like arms and legs, to people who have lost them can help them move better than they can by themselves. But to make these parts easy to use, we need to make their computers better at guessing what to do when the person using them decides to move. We want to do this so that doctors can help people who are hurt in their back or brain walk more easily.
Robotic leg prostheses and exoskeletons are developed with human assistance in mind, but their control is based on mathematical models that disregard how the human control system will adapt to these devices. So, we are trying to identify a predictive computational model of how humans adapt their motor control when robotic devices assist their locomotion. Identifying such predictive models can lead to paradigm shifts in the design of controllers for assistive devices and of clinical rehabilitation protocols that seek to restore locomotor function in stroke and spinal cord injured patients.