
The Simple Problem Machines Can’t Solve

“I do apologize for not being able to satisfy a lot of people’s expectations. I kind of felt powerless,” said Go grandmaster Lee Sedol after his recent, surprising 1-4 loss to the artificial intelligence AlphaGo [1].

Machines have conquered most of the games mankind has created, including chess, Scrabble, and even Jeopardy!. The ancient game Go, exponentially more complex than chess, was once considered one of the ultimate tests of a machine’s capabilities. Yet with Lee’s loss, it too has been conquered. Given the rapid advances in artificial intelligence, one cannot help but wonder, “Is there any limit to what a machine can do?”

While machines have become smart enough to defeat humans at sophisticated games, humans have cleverly devised a problem that machines definitely cannot solve. Impressively, the problem was constructed more than 80 years ago, even before the birth of digital computers. The hero who came up with this construction was the mathematician Kurt Gödel. Later, Alan Turing, the father of computer science, used Gödel’s techniques to prove an analogous theorem in the context of computer science. In its simplest form, the theorem states that there exist problems that no machine will ever be able to solve. Continue reading
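For a taste of the argument, here is a minimal Python sketch (my illustration, not the post’s) of Turing’s diagonal construction; `halts` stands in for the hypothetical perfect halting decider that the theorem rules out:

```python
def paradox_for(halts):
    # `halts(program)` is a hypothetical decider: it should return True
    # exactly when calling `program()` would eventually stop.
    def paradox():
        if halts(paradox):   # the decider claims paradox halts...
            while True:      # ...so paradox loops forever instead.
                pass
        # Otherwise the decider claims paradox loops, so it halts immediately.
    return paradox

# Whatever verdict a candidate decider gives about `paradox`, the program
# does the opposite, so no always-correct `halts` can exist.
always_yes = lambda program: True   # a deliberately wrong candidate decider
p = paradox_for(always_yes)
# Calling p() would loop forever here, contradicting always_yes's verdict.
```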


On Nautil.us: How Big Data Creates False Confidence

Another SNF-workshopped article on Facts So Romantic, the blog of Nautilus magazine:

If I claimed that Americans have gotten more self-centered lately, you might just chalk me up as a curmudgeon, prone to good-ol’-days whining. But what if I said I could back that claim up by analyzing 150 billion words of text? A few decades ago, evidence on such a scale was a pipe dream. Today, though, 150 billion data points is practically passé. A feverish push for “big data” analysis has swept through biology, linguistics, finance, and every field in between.

But there’s a problem: it’s tempting to think that, with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. Yet the bigness of the data can imbue the results with a false sense of certainty. Many big-data findings are probably bogus, and the reasons why should give us pause about any research that blindly trusts big data.
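To see how sheer scale can manufacture certainty, here is a toy simulation (my illustration, not the article’s): with ten million samples per group, a difference far too small to matter still comes out wildly “statistically significant.”

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000_000                     # ten million samples per group
a = rng.normal(loc=0.00, scale=1.0, size=n)
b = rng.normal(loc=0.01, scale=1.0, size=n)  # a trivially small real difference

t, p = stats.ttest_ind(a, b)
print(f"t = {t:.1f}, p = {p:.3g}")
# Enormous n makes even a meaningless 0.01-standard-deviation gap
# look overwhelmingly "significant."
```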

Read the whole article on the Nautilus website.

On Nautil.us: The Most Important Object In Computer Graphics History Is This Teapot

From an article reviewed by SNF and posted yesterday on Facts So Romantic, the blog of Nautilus magazine:

Let’s play a game. I’ll show you a picture and a couple videos—just watch the first five seconds or so—and you figure out what they have in common. Ready? Here we go:


Did you spot it? Each of them depicts the exact same object: a shiny, slightly squashed-looking teapot… This unassuming object—the “Utah teapot,” as it’s affectionately known—has had an enormous influence on the history of computing.

Continue reading

Why computer scientists and linguists don’t always see eye-to-eye

Linguists have many theories about how language works. How much should computer scientists who work on language care?


“You’ve just explained my entire life to me.” This was the last thing I was expecting to hear from Lori, my graduate advisor, in the midst of a discussion of my career plans. I gave a stiff smile and shifted uncomfortably in my chair. “What you just said,” she continued, “that’s why I’m here, not in a linguistics department. In a linguistics department in the ’80s, I might have felt like a hypocrite.”

What I’d said hadn’t been a deliberate attempt to enlighten a researcher 30 years my senior. I’d simply mentioned my preference for application-oriented research groups, because I care more about producing useful insights than true theories. Apparently, though, the distinction between usefulness and truth struck a chord with Lori: in the field she and I work in, what’s useful isn’t always true, and what’s true is often not useful. Lori, a linguist in a school of computer science, has found her career path to be largely determined by that distinction.

Continue reading

The Cosmos’ Attack on Computers

In 1984, IBM encountered a mystery: computers in Denver were making ten times more mistakes than the national average. The operators of the computers kept reporting memory errors, but whenever they sent a memory unit back to IBM, the company could find nothing physically wrong. Why wouldn’t computers work properly in Denver?

For several years, the operators had to work around the fact that their computers would occasionally just forget things. It was almost like the computers were high – which, it turned out, was precisely the problem. At 5,280 feet, computers in Denver are much more susceptible to an unlikely culprit: cosmic rays. Continue reading
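What does one of these cosmic-ray “mistakes” look like in practice? Here is a minimal sketch (my illustration, assuming a single-event upset that flips one stored bit):

```python
# A minimal sketch of a single-event upset: one flipped bit in a stored value.
value = 42
struck_bit = 1 << 20            # hypothetical bit position hit by a cosmic ray
corrupted = value ^ struck_bit  # XOR flips exactly that one bit

print(value)      # 42
print(corrupted)  # 1048618 -- a single flipped bit, a wildly different number
```

Error-correcting memory exists to catch exactly this kind of silent flip, which is part of why IBM could find nothing physically wrong with the returned units.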

The Unreasonable Effectiveness of Science

Or Why Effective Theories are so … Effective

Solids, liquids, gases. What do these words really mean? How about cell, organ, or human? At a fundamental level, these things are complex assemblages of interacting subatomic particles. But you probably have an easy time recognizing a human without knowing anything about their electron configuration; you might instead identify key characteristics like physical appearance or behavior. These various abstractions help us understand the macro-world, but a seemingly naive philosophical question is, “Why can we do that?” Continue reading

MOOCs: An Online Education Revolution

Two years ago, Sebastian Thrun, a computer science researcher at Stanford, claimed that in fifty years there would be only ten institutions in the world delivering higher education. [1] That’s a pretty ambitious statement, given that there are over 9,000 universities across the globe today! Thrun’s confidence in his claim stemmed from his recent work in morphing traditional college lectures into Massive Open Online Courses, also known as MOOCs. Instead of restricting knowledge to the privileged few, these courses aimed to bring the highest levels of teaching and scholarship to students everywhere. In a way, Thrun succeeded: instead of his course on artificial intelligence reaching around 200 university students, he was able to digitally distribute lectures and assignments to 160,000 students around the world.

Recently, however, MOOCs have begun falling out of favor. Critics have raised questions about the quality of online classes, and several universities that had originally jumped on the bandwagon have quietly cancelled their plans to move courses online. Thrun now admits, “we were on the front pages of newspapers and magazines, and at the same time, I was realizing, we don’t educate people as others wished, or as I wished. We have a lousy product.” [2] What caused this abrupt change of opinion? Various academics and teachers have placed the blame on a variety of issues, such as high dropout rates and a lack of diversity in student populations, but we can look for answers ourselves by examining the online education of the past.

Continue reading