From an article workshopped by SNF and recently featured in the newsletter of the Society for Industrial and Applied Mathematics:
Every researcher is also part writer. It’s a label that may be unfamiliar or even unwelcome to many graduate students, professors, and industry scientists. But between grants, papers, and reports to higher-ups, writing is undeniably a huge part of research.
Yet somehow, even with all that practice, the thought of writing for a mass-market magazine or news site can seem like a leap into a world so foreign that it’s unapproachable. The apparent chasm between us and a broader audience is further widened by the mathematical intensity of our work.
After all, what layperson wants to read about math? Thousands, it turns out, with appropriate translation, and the barriers to reaching them are lower than you might think. Over the course of my Ph.D. in computer science at Carnegie Mellon University (CMU), I’ve been increasingly drawn to science writing, culminating in an American Association for the Advancement of Science (AAAS) Mass Media Fellowship this past summer at Scientific American. I’ve found the most daunting obstacles to be largely illusory, vanishing as soon as I was nudged into confronting them. And not only was my background not an impediment, it proved to be an unexpected boon; my mathematical training opened up otherwise impenetrable stories to me — and to thousands of readers by extension.
Check out the full piece on the SIAM website.
Another SNF-workshopped article on the Popular Mechanics blog:
It’s the bane of every web surfer, the internet’s version of fingernails on a chalkboard. Click almost any link that predates 2005 and brace for the inevitable: “HTTP 404 Not Found.”
Anyone who’s spent time near an internet connection is familiar with the 404 error, a web server’s way of saying you’ve reached a dead end. What’s less well known is that this very error is what allowed the World Wide Web to exist in the first place.
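The error itself is mundane machinery: a 404 is just a three-digit status code on the first line of the server’s reply. Here’s a minimal sketch (my own illustration, not from the article) that provokes one from a throwaway local server; the path is arbitrary:

```python
# A 404 is just a status code in the HTTP response. This sketch starts a
# throwaway local server and requests a path that doesn't exist.
# (Illustrative only; the port is chosen by the OS, the path is made up.)
import http.server
import threading
import urllib.error
import urllib.request

server = http.server.HTTPServer(("127.0.0.1", 0),
                                http.server.SimpleHTTPRequestHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

try:
    urllib.request.urlopen(f"http://127.0.0.1:{port}/no-such-page")
    status = None
except urllib.error.HTTPError as err:
    status = err.code  # the server answered, but with an error status

server.shutdown()
print(status)  # 404
```

Note that the server did respond; a 404 is a perfectly well-formed answer whose content happens to be “I don’t have that.”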
Read the whole article on the Popular Mechanics website.
From an article reviewed by SNF and posted yesterday on Scientific American’s guest blog:
The cleverest card trick I’ve ever seen was performed not by a magician, but by a math professor.
A teaching assistant (let’s call him Nick), acting as magician’s assistant, recruited five student participants. Each student picked a card from a 52-card deck and handed it back to Nick, face up but invisible to Tom, the professor. Nick laid out four of these cards in front of Tom. To our astonishment, Tom immediately identified the missing fifth card.
The professor revealed the trick at the end of class. But when I came back to my dorm, bursting with excitement, my suitemate Benjamin refused to let me explain it; he had to figure this out on his own. He wandered off to his room muttering to himself, blissfully unaware that within twenty-four hours, this puzzle would prove disastrous to his dignity.
Read the whole article on the Scientific American website.
Another SNF-workshopped article on Facts So Romantic, the blog of Nautilus magazine:
If I claimed that Americans have gotten more self-centered lately, you might just chalk me up as a curmudgeon, prone to good-ol’-days whining. But what if I said I could back that claim up by analyzing 150 billion words of text? A few decades ago, evidence on such a scale was a pipe dream. Today, though, 150 billion data points is practically passé. A feverish push for “big data” analysis has swept through biology, linguistics, finance, and every field in between.
But there’s a problem: it’s tempting to think that studies backed by such an incredible volume of data couldn’t possibly be wrong. In reality, the sheer bigness of the data can imbue the results with a false sense of certainty. Many of these studies are probably bogus, and the reasons why should give us pause about any research that blindly trusts big data.
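One concrete way this can happen (my own toy illustration, not an example drawn from the article): compare enough variables against each other and some pair will look strongly correlated by pure chance.

```python
# Toy illustration (not from the article): among many purely random
# "measurements," some pair will correlate strongly by sheer chance -
# one reason data volume alone doesn't guarantee sound conclusions.
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 100 unrelated "variables," each 20 samples of pure noise.
series = [[random.random() for _ in range(20)] for _ in range(100)]

# The strongest correlation found among all ~5,000 pairs:
best = max(
    abs(corr(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(round(best, 2))  # a strong-looking correlation, despite pure noise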
Read the whole article on the Nautilus website.
From an article reviewed by SNF and posted yesterday on Facts So Romantic, the blog of Nautilus magazine:
Let’s play a game. I’ll show you a picture and a couple videos—just watch the first five seconds or so—and you figure out what they have in common. Ready? Here we go:
Did you spot it? Each of them depicts the exact same object: a shiny, slightly squashed-looking teapot… This unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing.
Do the laws of physics know their left from their right? (Image adapted from Dean Hochman)
Can you tell your left from your right?
When I was three years old, I took significant pride in the fact that I could – particularly the day I discovered that many of my fellow preschoolers had not yet achieved this feat. Of course, as the years wore on and my classmates got the hang of it, I came to take the distinction for granted. It was so…pedestrian. Trivial, even.
And then a college professor showed up and left me so confused about left and right that I couldn’t fathom how anyone could rightly know which was which.
Linguists have many theories about how language works. But how much should the computer scientists who work with language care? (CC image courtesy of Flickr/surrealmuse)
“You’ve just explained my entire life to me.” This was the last thing I was expecting to hear from Lori, my graduate advisor, in the midst of a discussion of my career plans. I gave a stiff smile and shifted uncomfortably in my chair. “What you just said,” she continued, “that’s why I’m here, not in a linguistics department. In a linguistics department in the ’80s, I might have felt like a hypocrite.”
What I’d said hadn’t been a deliberate attempt to enlighten a researcher 30 years my senior. I’d simply mentioned my preference for application-oriented research groups, because I care more about producing useful insights than true theories. Apparently, though, the distinction between usefulness and truth struck a chord with Lori: in the field she and I work in, what’s useful isn’t always true, and what’s true is often not useful. Lori, a linguist in a school of computer science, has found her career path to be largely determined by that distinction.
Nah, we’re all too busy worrying about whether we’re experts in our field. (Source: xkcd)
When my advisor informed her assembled advisees that I was the group’s “machine learning expert,” I nearly choked. I thought I had a pretty good idea of what expertise looked like. An expert possesses a deep, intuitive understanding of his or her subject. An expert exudes confidence in his or her abilities and reputation. An expert fields detailed questions without batting an eyelid. What an expert most certainly does not look like, I thought, is a clueless amateur of a Ph.D. student.
My lofty image of expertise was not my own invention – our society has an unhealthy tendency to fetishize experts. We see the degree of knowledge possessed by professors and analysts and TED speakers as almost mystical. We speak in awed whispers of their brilliance and intuition. And of course, the praise is often well-deserved; I don’t mean to suggest that there is no such thing as expertise. But the way we idolize experts does great damage to experts and novices alike.
In 1984, IBM encountered a mystery: computers in Denver were making ten times more unexplained mistakes than the national average. The operators of the computers kept reporting memory errors, but whenever they sent a memory unit back to IBM, the company could find nothing physically wrong. Why wouldn’t computers work properly in Denver?
For several years, the operators had to work around the fact that their computers would occasionally just forget things. It was almost like the computers were high – which, it turned out, was precisely the problem. At 5,280 feet, computers in Denver are much more susceptible to an unlikely culprit: cosmic rays.
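The failures in question are single bits flipping in memory. As a toy sketch (my own illustration, not IBM’s actual hardware scheme), here is how a single parity bit lets a machine at least detect a one-bit flip on read-back:

```python
# Toy illustration: a parity bit lets memory hardware detect (though not
# correct) a single flipped bit in a stored word. This is a simplification
# for illustration, not the error-checking scheme IBM's machines used.

def parity(word: int) -> int:
    """Return 1 if the word has an odd number of 1 bits, else 0."""
    return bin(word).count("1") % 2

stored = 0b10110100
stored_parity = parity(stored)  # recorded alongside the word

# A cosmic-ray strike flips one bit (here, bit 3):
corrupted = stored ^ (1 << 3)

# On read-back, the recomputed parity no longer matches:
error_detected = parity(corrupted) != stored_parity
print(error_detected)  # True
```

Detecting the flip is the easy half; correcting it takes more redundancy, which is why later memory systems added error-correcting codes rather than bare parity.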
Everything in the world works by processing information. Even thermoses.
There’s an old joke about three construction workers arguing about the greatest invention ever. The first worker nominates the telephone: “Now we can hear people from miles away!” The second worker points out that with television we can see them, too. “No,” the third worker insists, “you’re both wrong. The greatest invention is the thermos.”

“The thermos?” the others ask. “Why?”

“Sure. On a cold day, it keeps my soup hot. On a warm day, it keeps my lemonade cold.”

“How does it know??”
Laughable though it is, this question is surprisingly insightful from the perspective of computer science. Computation is all about what a system knows and what it can learn from that: how can your laptop, armed with just a wireless radio, find out what www.google.com looks like? How can a security camera infer what’s in an image from a blurry bunch of pixels? Almost every classic problem in computer science amounts to some form of manipulating information.