10 shows to binge watch during your long holiday weekend

http://ift.tt/2bNQNbj

It’s Labor Day weekend, which for most of us means a nice three days off from work. It’s the perfect opportunity to catch up on some of the shows you’ve probably been hearing about from friends and co-workers.

Here are ten shows to watch this long holiday weekend before you have to go back to the office on Tuesday.

American Crime

One of the new anthology dramas just popped up on Netflix. The first season of this crime series stars Timothy Hutton and Felicity Huffman, following the story of the murder of a husband and wife in Modesto, California. Racial tensions flare as the investigation and trial begin.

Where you can watch it: Netflix (Season 1)

The Expanse

The Syfy channel’s big foray back into space television is a criminally underrated show. Based on the novels by James S.A. Corey, The Expanse takes place 200 years in the future. Tensions between Earth, Mars and the Outer Planets Alliance heat up after the destruction of a water freighter. Season 2 is coming in early 2017, and we absolutely can’t wait.

Where you can watch: Syfy (with cable subscription)

Halt and Catch Fire promotional still (AMC)

Halt and Catch Fire

The third season of Halt and Catch Fire just started airing on AMC last week, but if you’ve missed this fantastic show, you can catch up with the first two seasons on Netflix. The first season follows a group of characters as they build a personal computer, while the second examines their efforts to create an online gaming empire in the 1980s.

Where you can watch: Netflix and AMC streaming

Hannibal

This was a show that really shouldn’t have ended as early as it did, but at least you can watch its entire run on Amazon. The show is based on the novels Red Dragon and Hannibal by Thomas Harris, and features an amazing performance by Mads Mikkelsen as the titular cannibalistic serial killer.

Where you can watch: Amazon Prime

Jonathan Strange & Mr. Norrell

The BBC’s fantastic drama based on the novel by Susanna Clarke aired last year, but in case you missed it, Netflix has you covered. The story of two rival magicians at the height of the Napoleonic Wars is a wonderful adaptation and an intense story of magic and power in England.

Where you can watch it: Netflix

Longmire

If you’re looking for a great set of mysteries, look no further than Longmire. Based on the crime series by Wyoming author Craig Johnson, it follows Sheriff Walt Longmire as he solves a variety of crimes in his county. The show was originally cancelled by A&E but rescued by Netflix for a fourth season, and its fifth season begins streaming on September 23rd.

Where you can watch: Netflix

Narcos still (Credit: Netflix)

Narcos

Netflix’s historical crime drama about Colombian drug kingpin Pablo Escobar is a show that we were hooked on when we first saw it last year. Season 2 picks up with a much more vulnerable Escobar, and it just began streaming on Netflix earlier this week.

Where you can watch it: Netflix

The Night Of

HBO’s crime miniseries has been described as one long Law & Order episode: the story of a murder in New York City, laden with political and cultural overtones, in which a Pakistani-American student named Nasir Khan stands accused. The finale just aired, so it’s a good time to get caught up before you’re inadvertently spoiled.

Where you can watch: HBO Go

Star Wars: Rebels

If animation is more your style, why not check out Star Wars Rebels? The prequel show examines the origins of the Rebel Alliance between Revenge of the Sith and A New Hope. We’re already extremely excited for its upcoming third season, which is going to be bringing back one of the Star Wars Expanded Universe’s best villains: Grand Admiral Thrawn. Now is a very, very good time to catch up.

Where you can watch: Disney XD (With cable subscription)

Stranger Things

Okay, you’ve probably seen the nostalgia-laden science fiction / horror show from the Duffer Brothers. We’re already big fans of this summer’s biggest hit, but if you haven’t seen the show set in 1983 about a missing child, you’re really missing out. If you have seen it, it’s time to give it another watch while you wait for Season 2.

Where you can watch it: Netflix

What else are you watching this weekend?



from The Verge http://ift.tt/oZfQdV
via IFTTT

Some rather strange history of maths

http://ift.tt/2aZMYkA


Scientific American has a guest blog post with the title “Mathematicians Are Overselling the Idea That ‘Math Is Everywhere’”, which argues in its subtitle: “The mathematics that is most important to society is the province of the exceptional few—and that’s always been true.” Now I’m not really interested in the substantive argument of the article, but the author, Michael J. Barany, opens his piece with some historical comments that I find to be substantially wrong; a situation made worse by the fact that the author is a historian of mathematics.

Barany’s third paragraph starts as follows:

In the first agricultural societies in the cradle of civilization, math connected the heavens and the earth. Priests used astronomical calculations to mark the seasons and interpret divine will, and their special command of mathematics gave them power and privilege in their societies.

We are talking about the area loosely known as Babylon, although the names and culture changed over the millennia, and it is largely a myth, not only for this culture, that astronomical calculations were used to mark the seasons. The Babylonian astrologers certainly interpreted the divine will, but they were civil servants who, whilst certainly belonging to the upper echelons of society, did not have much in the way of power or privilege. They were trained experts who did a job for which they got paid. If they did it well they lived a peaceful life; if they did it badly they risked an awful lot, including their lives.

Barany continues as follows:

As early economies grew larger and more complex, merchants and craftsmen incorporated more and more basic mathematics into their work, but for them mathematics was a trick of the trade rather than a public good. For millennia, advanced math remained the concern of the well-off, as either a philosophical pastime or a means to assert special authority.

It is certainly true that merchants and craftsmen in advanced societies – Babylon, Greece, Rome – used basic mathematics in their work, but as these people provided the bedrock of their societies – food, housing etc. – I think it is safe to say that their maths-based activities were in general for the public good. As for advanced maths, and here I restrict myself to European history, it appeared no earlier than 1500 BCE in Babylon and had disappeared again by the fourth century CE with the collapse of the Roman Empire, so we are talking about two millennia at the most. Also, for a large part of that time the Romans, who were the dominant power of the period, didn’t really have much interest in advanced maths at all.

With the rebirth of European learned culture in the High Middle Ages we have a society that founded the European universities but, like the Romans, didn’t really care for advanced maths, which only really began to reappear in the fifteenth century. Barany’s next paragraph contains an inherent contradiction:

The first relatively widespread suggestions that anything beyond simple practical math ought to have a wider reach date to what historians call the Early Modern period, beginning around five centuries ago, when many of our modern social structures and institutions started to take shape. Just as Martin Luther and other early Protestants began to insist that Scripture should be available to the masses in their own languages, scientific writers like Welsh polymath Robert Recorde used the relatively new technology of the printing press to promote math for the people. Recorde’s 1543 English arithmetic textbook began with an argument that “no man can do any thing alone, and much less talk or bargain with another, but he shall still have to do with number” and that numbers’ uses were “unnumerable” (pun intended).

Barany says, “that anything beyond simple practical math ought to have a wider reach…” and then goes on to suggest that this was typified by Robert Recorde with his The Grounde of Artes from 1543. Recorde’s book is very basic arithmetic; it is an abbacus or reckoning book for teaching basic arithmetic and bookkeeping to apprentices. In other words, it is a book of simple practical maths. Historically, what makes Recorde’s book interesting is that it is the first such book written in English, whereas on the continent such books had been produced in the vernacular, first as manuscripts and later as printed books, since the thirteenth century, when Leonardo of Pisa produced his Liber Abbaci, the book that gave the genre its name. Abbaci comes from the Italian verb to calculate or to reckon.

What however led me to write this post is the beginning of Barany’s next paragraph:

Far more influential and representative of this period, however, was Recorde’s contemporary John Dee, who used his mathematical reputation to gain a powerful position advising Queen Elizabeth I. Dee hewed so closely to the idea of math as a secret and privileged kind of knowledge that his detractors accused him of conjuring and other occult practices.

Barany is contrasting Recorde, in his view a man of the people bringing mathematics to the masses, with Dee, an elitist defender of mathematics as secret and privileged knowledge. This would be quite funny if it weren’t contained in an essay in Scientific American. Let us examine the two founders of the so-called English School of Mathematics a little more closely.

Robert Recorde, who obtained a doctorate in medicine from Cambridge University, was in fact personal physician to both Edward VI and Queen Mary. He served as comptroller of the Bristol Mint and supervisor of the Dublin Mint, both important high-level government appointments. Dee acquired a BA at St John’s College Cambridge and became a fellow of Trinity College. He then travelled extensively on the continent, studying in Leuven under Gemma Frisius. Shortly after his return to England he was thrown into prison on suspicion of sedition against Queen Mary, a charge of which he was eventually cleared. Although often consulted by Queen Elizabeth, he never, unlike Recorde, managed to obtain an official court appointment.

On the mathematical side Recorde did indeed write and publish, in English, a series of four introductory mathematics textbooks establishing the so-called English School of Mathematics. Following Recorde’s death it was Dee who edited and published further editions of Recorde’s mathematics books. Dee, having studied under Gemma Frisius and Gerard Mercator, introduced modern cartography and globe-making into Britain. He also taught navigation and cartography to the captains of the Muscovy Trading Company. In his home in Mortlake, Dee assembled the largest mathematics library in Europe, which functioned as a sort of open university for all who wished to come and study with him. His most important pupil was his foster son Thomas Digges, who went on to become the most important English mathematical practitioner of the next generation. Dee also wrote the preface to the first English translation of Euclid’s Elements by Henry Billingsley. The preface is a brilliant tour de force surveying, in English, all the existing branches of mathematics. Somehow this is not the picture of a man who hewed so closely to the idea of math as a secret and privileged kind of knowledge. Dee was an evangelising populariser and propagator of mathematics for everyman.

It is, however, Barany’s next segment that should leave any historian of science or mathematics totally gobsmacked and gasping for words. He writes:

In the seventeenth century’s Scientific Revolution, the new promoters of an experimental science that was (at least in principle) open to any observer were suspicious of mathematical arguments as inaccessible, tending to shut down diverse perspectives with a false sense of certainty.

What can I say? I hardly know where to begin. Let us just list the major seventeenth-century contributors to the so-called Scientific Revolution, which itself has been characterised as the mathematization of nature (my emphasis): Simon Stevin, Johannes Kepler, Galileo Galilei, René Descartes, Blaise Pascal, Christiaan Huygens and last but by no means least Isaac Newton. Every single one of them a mathematician, whose very substantial contributions to the so-called Scientific Revolution were all mathematical. I could also add an even longer list of not quite so well known mathematicians who contributed. The seventeenth century has also been characterised, by more than one historian of mathematics, as the golden age of mathematics, producing as it did modern algebra, analytical geometry and calculus, along with a whole raft of other mathematical developments.

The only thing I can say in Barany’s defence is that he is apparently a historian of modern, i.e. twentieth-century, mathematics. I would politely suggest that, should he again venture somewhat deeper into the past, he first do a little more research.



from Hacker News http://ift.tt/YV9WJO
via IFTTT

Why you should aim for 100 rejections a year

http://ift.tt/292XKFf


In the book Art & Fear, authors David Bayles and Ted Orland describe a ceramics class in which half of the students were asked to focus only on producing a high quantity of work while the other half was tasked with producing work of high quality. For a grade at the end of the term, the “quantity” group’s pottery would be weighed, and fifty pounds of pots would automatically get an A, whereas the “quality” group only needed to turn in one—albeit perfect—piece. Surprisingly, the works of highest quality came from the group being graded on quantity, because they had continually practiced, churned out tons of work, and learned from their mistakes. The other half of the class spent most of the semester paralyzed by theorizing about perfection, which sounded disconcertingly familiar to me—like all my cases of writer’s block.

Being a writer sometimes feels like a paradox. Yes, we should be unswerving in our missions to put passion down on paper, unearthing our deepest secrets and most beautiful bits of humanity. But then, later, each of us must step back from those raw pieces of ourselves and critically assess, revise, and—brace yourself—sell them to the hungry and unsympathetic public. This latter process is not only excruciating for most of us (hell, if we were good at sales we would be making good money working in sales), but it can poison that earlier, unselfconscious creative act of composition.

In Bird by Bird, Anne Lamott illustrates a writer’s brain as being plagued by the imaginary radio station KFKD (K-Fucked), in which one ear pipes in arrogant, self-aggrandizing delusions while the other ear can only hear doubts and self-loathing. Submitting to journals, residencies, fellowships, or agents amps up that noise. How could it not? These are all things that writers want, and who doesn’t imagine actually getting them? But we’d be much better off if only we could figure out how to turn down KFKD, or better yet, change the channel—uncoupling the word “rejection” from “failure.”

There are two moments from On Writing, Stephen King’s memoir and craft book, that I still think about more than 15 years after reading it: the shortest sentence in the world, “Plums defy!” (which he presented as evidence that writing need not be complex), and his nailing of rejections. When King was in high school, he sent out horror and sci-fi fantasy stories to pulpy genre magazines. For the first few years, they all got rejected. He stabbed his rejection slips onto a nail protruding from his bedroom wall, which soon grew into a fat stack, rejection slips fanned out like kitchen dupes on an expeditor’s stake in a crowded diner. Done! That one’s done! Another story bites the dust! That nail bore witness to King’s first attempts at writing, before he became one of the most prolific and successful authors in the world.



from Hacker News http://ift.tt/YV9WJO
via IFTTT

This is why it's so great that the iPhone 7 is killing off the headphone jack - BGR

http://ift.tt/2c8dJkF


Apple’s new iPhone 7 and iPhone 7 Plus will be packed full of all sorts of technology when they debut next week. They’ll sport brand new cameras that should be among the best in the world, and they’ll pack new Apple A10 processors that will undoubtedly blow away the competition. But as the rumor mill heats up ahead of next week’s announcement, attention still isn’t focused on all of the great new features Apple’s next-generation iPhones will usher in. Instead, everyone is fixed on two “problems” with Apple’s iPhone 7 lineup.

The first issue is the design, which many people complain looks too much like Apple’s previous two generations of iPhones. That’s a valid complaint, though we would counter that a familiar design hardly means the iPhone 7 and iPhone 7 Plus are going to be boring. The second issue is the lack of a standard headphone jack, but there’s a silver lining there as well.

Apple has made no announcements at this point, so absolutely nothing is confirmed when it comes to the company’s next-generation iPhone 7 and iPhone 7 Plus. That said, the new phones aren’t going to have 3.5mm headphone jacks. We’d bet the barn on it.

For many users, that change means they can no longer simply plug their standard headphones into the iPhone. Is that a big deal? Well, it would be if Apple didn’t plan to include a 3.5mm to Lightning adapter right in the box with its new iPhones. But since there is an adapter there, this is a non-story. The same headphones you’re using right now will connect to your iPhone 7 later this month, just like they connect to your iPhone 6s or 6s Plus right now.

So what’s so great about Apple’s move to ditch the headphone jack? Aside from a thinner design, better water-resistance and a step toward a future iPhone with no ports at all, people who decide to upgrade and purchase new Lightning headphones will enjoy much higher-quality sound. And beyond that, Apple’s move is pushing other companies to embrace the Lightning connector, which means they can improve their products as a result.

Here’s a perfect example: Check out these noise cancelling earbuds from Bose. They’re fantastic and they actively cancel noise rather than just isolate sound like regular earbuds. But there’s an obvious downside: using these headphones means you have yet another device to worry about charging. Since they use active noise cancellation, they need to be powered.

Now take a look at the new Q Adapt In-Ear Earbuds announced on Thursday by Libratone. They also offer active noise cancellation so they also require power, but you’ll never have to charge them even once. Why? Because they draw power from a connected iPhone, iPad or iPod touch through the Lightning port.

Apple has been using a Lightning connector for years now on its iOS devices, but we’re about to see an influx of great new audio products that are improved by the utilization of Apple’s digital Lightning connector. Why now? That’s right, it’s because the new iPhone 7 and iPhone 7 Plus won’t have 3.5mm audio ports.

Sure there are drawbacks — using Lightning headphones instead of older headphones means you can only connect them to Apple devices. But for many, the benefits will outweigh the downsides; keeping your old headphones around to use on airplanes when you travel isn’t really such a big deal, is it?



from Top Stories - Google News http://ift.tt/1OhtLUV
via IFTTT

Is artificial intelligence permanently inscrutable?

http://ift.tt/2bEpB0g

Dmitry Malioutov can’t say much about what he built.

As a research scientist at IBM, Malioutov spends part of his time building machine learning systems that solve difficult problems faced by IBM’s corporate clients. One such program was meant for a large insurance corporation. It was a challenging assignment, requiring a sophisticated algorithm. When it came time to describe the results to his client, though, there was a wrinkle. “We couldn’t explain the model to them because they didn’t have the training in machine learning.”

In fact, it may not have helped even if they were machine learning experts. That’s because the model was an artificial neural network, a program that takes in a given type of data—in this case, the insurance company’s customer records—and finds patterns in them. These networks have been in practical use for over half a century, but lately they’ve seen a resurgence, powering breakthroughs in everything from speech recognition and language translation to Go-playing robots and self-driving cars.

Hidden meanings: In neural networks, data is passed from layer to layer, undergoing simple transformations at each step. Between the input and output layers are hidden layers, groups of nodes and connections that often bear no human-interpretable patterns or obvious connections to either input or output. “Deep” networks are those with many hidden layers. (Credit: Michael Nielsen / NeuralNetworksandDeepLearning.com)

As exciting as their performance gains have been, though, there’s a troubling fact about modern neural networks: Nobody knows quite how they work. And that means no one can predict when they might fail.

Take, for example, an episode recently reported by machine learning researcher Rich Caruana and his colleagues. They described the experiences of a team at the University of Pittsburgh Medical Center who were using machine learning to predict whether pneumonia patients might develop severe complications. The goal was to send patients at low risk for complications to outpatient treatment, preserving hospital beds and the attention of medical staff. The team tried several different methods, including various kinds of neural networks, as well as software-generated decision trees that produced clear, human-readable rules.

The neural networks were right more often than any of the other methods. But when the researchers and doctors took a look at the human-readable rules, they noticed something disturbing: One of the rules instructed doctors to send home pneumonia patients who already had asthma, despite the fact that asthma sufferers are known to be extremely vulnerable to complications.

The model did what it was told to do: Discover a true pattern in the data. The poor advice it produced was the result of a quirk in that data. It was hospital policy to send asthma sufferers with pneumonia to intensive care, and this policy worked so well that asthma sufferers almost never developed severe complications. Without the extra care that had shaped the hospital’s patient records, outcomes could have been dramatically different.

The hospital anecdote makes clear the practical value of interpretability. “If the rule-based system had learned that asthma lowers risk, certainly the neural nets had learned it, too,” wrote Caruana and colleagues—but the neural net wasn’t human-interpretable, and its bizarre conclusions about asthma patients might have been difficult to diagnose [1]. If there hadn’t been an interpretable model, Malioutov cautions, “you could accidentally kill people.”

This is why so many are reluctant to gamble on the mysteries of neural networks. When Malioutov presented his accurate but inscrutable neural network model to his own corporate client, he also offered them an alternative, rule-based model whose workings he could communicate in simple terms. This second, interpretable, model was less accurate than the first, but the client decided to use it anyway—despite being a mathematically sophisticated insurance company for which every percentage point of accuracy mattered. “They could relate to [it] more,” Malioutov says. “They really value intuition highly.”

Even governments are starting to show concern about the increasing influence of inscrutable neural-network oracles. The European Union recently proposed to establish a “right to explanation,” which allows citizens to demand transparency for algorithmic decisions [2]. The legislation may be difficult to implement, however, because the legislators didn’t specify exactly what “transparency” means. It’s unclear whether this omission stemmed from ignorance of the problem, or an appreciation of its complexity.

In fact, some believe that such a definition might be impossible. At the moment, though we can know everything there is to know about what neural networks are doing—they are, after all, just computer programs—we can discern very little about how or why they are doing it. The networks are made up of many, sometimes millions, of individual units, called neurons. Each neuron converts many numerical inputs into a single numerical output, which is then passed on to one or more other neurons. As in brains, these neurons are divided into “layers,” groups of cells that take input from the layer below and send their output to the layer above.

Neural networks are trained by feeding in data, then adjusting the connections between layers until the network’s calculated output matches the known output (which usually consists of categories) as closely as possible. The incredible results of the past few years are thanks to a series of new techniques that make it possible to quickly train deep networks, with many layers between the first input and the final output. One popular deep network called AlexNet is used to categorize photographs—labeling them according to such fine distinctions as whether they contain a Shih Tzu or a Pomeranian. It consists of over 60 million “weights,” each of which tells a neuron how much attention to pay to each of its inputs. “In order to say you have some understanding of the network,” says Jason Yosinski, a computer scientist affiliated with Cornell University and Geometric Intelligence, “you’d have to have some understanding of these 60 million numbers.”
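
To make that training loop concrete, here is a deliberately tiny sketch in plain NumPy. It is an illustrative toy of our own, not anything from the article and nowhere near the scale of AlexNet’s 60 million weights: a network with one hidden layer learns the XOR pattern by repeatedly comparing its calculated output to the known output and nudging the connections between layers to shrink the error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known inputs and outputs: the classic XOR pattern.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two layers of connections ("weights"): input -> hidden -> output.
W1 = rng.normal(scale=0.5, size=(2, 8))
W2 = rng.normal(scale=0.5, size=(8, 1))
lr = 0.5

for _ in range(5000):
    # Forward pass: each layer takes its input from the layer below.
    h = np.tanh(X @ W1)                      # hidden layer
    out = 1.0 / (1.0 + np.exp(-(h @ W2)))    # output layer

    # Backward pass: adjust the connections to shrink the error.
    d_out = (out - y) * out * (1 - out)      # error signal at the output
    d_h = (d_out @ W2.T) * (1 - h ** 2)      # error signal at the hidden layer
    W2 -= lr * (h.T @ d_out)
    W1 -= lr * (X.T @ d_h)

print(out.round(2))   # after training, should be close to [[0], [1], [1], [0]]
```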

Even if it were possible to impose this kind of interpretability, it may not always be desirable. The requirement for interpretability can be seen as another set of constraints, preventing a model from a “pure” solution that pays attention only to the input and output data it is given, and potentially reducing accuracy. At a DARPA conference early this year, program manager David Gunning summarized the trade-off in a chart that shows deep networks as the least understandable of modern methods. At the other end of the spectrum are decision trees, rule-based systems that tend to prize explanation over efficacy.

What vs. why: Modern learning algorithms show a tradeoff between human interpretability, or explainability, and their accuracy. Deep learning is both the most accurate and the least interpretable. (Credit: DARPA)

The result is that modern machine learning offers a choice among oracles: Would we like to know what will happen with high accuracy, or why something will happen, at the expense of accuracy? The “why” helps us strategize, adapt, and know when our model is about to break. The “what” helps us act appropriately in the immediate future.

It can be a difficult choice to make. But some researchers hope to eliminate the need to choose—to allow us to have our many-layered cake, and understand it, too. Surprisingly, some of the most promising avenues of research treat neural networks as experimental objects—after the fashion of the biological science that inspired them to begin with—rather than analytical, purely mathematical objects. Yosinski, for example, says he is trying to understand deep networks “in the way we understand animals, or maybe even humans.” He and other computer scientists are importing techniques from biological research that peer inside networks after the fashion of neuroscientists peering into brains: probing individual components, cataloguing how their internals respond to small changes in inputs, and even removing pieces to see how others compensate.
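
One of those borrowed techniques, “removing pieces to see how others compensate,” is easy to illustrate on a toy network. The sketch below is our own (not the researchers’ code): it knocks out one hidden unit at a time and measures how much the output shifts, the software analogue of a lesion study.

```python
import numpy as np

rng = np.random.default_rng(2)

# A tiny fixed "network": 8 inputs -> 16 hidden units -> 1 output.
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 1))

def forward(x, ablate=None):
    h = np.maximum(0, x @ W1)        # hidden layer (ReLU)
    if ablate is not None:
        h[:, ablate] = 0.0           # "lesion" one hidden unit
    return h @ W2

x = rng.normal(size=(100, 8))        # a batch of probe inputs
baseline = forward(x)

# Knock out each hidden unit in turn and see how much the output moves.
for unit in range(16):
    shift = np.mean(np.abs(forward(x, ablate=unit) - baseline))
    print(f"unit {unit:2d}: mean output change {shift:.3f}")
```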

Having built a new intelligence from scratch, scientists are now taking it apart, applying to these virtual organisms the digital equivalents of a microscope and scalpel.


Yosinski sits at a computer terminal, talking into a webcam. The data from the webcam is fed into a deep neural net, while the net itself is being analyzed, in real time, using a software toolkit Yosinski and his colleagues developed called the Deep Visualization toolkit. Clicking through several screens, Yosinski zooms in on one neuron in the network. “This neuron seems to respond to faces,” he says in a video record of the interaction [3]. Human brains are also known to have such neurons, many of them clustered in a region of the brain called the fusiform face area. This region, discovered over the course of multiple studies beginning in 1992 [4, 5], has become one of the most reliable observations in human neuroscience. But where those studies required advanced techniques like positron emission tomography, Yosinski can peer at his artificial neurons through code alone.

Brain activity: A single neuron in a deep neural net (highlighted by a green box) responds to Yosinski’s face, just as a distinct part of the human brain reliably responds to faces (highlighted in pink). (Left: Jason Yosinski, et al. Understanding Neural Networks Through Deep Visualization. Deep Learning Workshop, International Conference on Machine Learning (ICML) (2015). Right: www.nsf.gov)

This approach lets him map certain artificial neurons to human-understandable ideas or objects, like faces, which could help turn neural networks into intuitive tools. His program can also highlight which aspects of a picture are most important to stimulating the face neuron. “We can see that it would respond even more strongly if we had darker eyes, and rosier lips,” he says.

To Cynthia Rudin, professor of computer science and electrical and computer engineering at Duke University, these “post-hoc” interpretations are by nature problematic. Her research focuses on building rule-based machine learning systems applied to domains like prison sentencing and medical diagnosis, where human-readable interpretations are possible—and critically important. But for problems in areas like vision, she says, “Interpretations are completely within the eye of the beholder.” We can simplify a network response by identifying a face neuron, but how can we be certain that’s really what it’s looking for? Rudin’s concerns echo the famous dictum that there may be no simpler model of the visual system than the visual system itself. “You could have many explanations for what a complex model is doing,” she says. “Do you just pick the one you ‘want’ to be correct?”

Yosinski’s toolkit can, in part, counter these concerns by working backward, discovering what the network itself “wants” to be correct—a kind of artificial ideal. The program starts with raw static, then adjusts it, pixel by pixel, tinkering with the image using the reverse of the process that trained the network. Eventually it finds a picture that elicits the maximum possible response of a given neuron. When this method is applied to AlexNet neurons, it produces caricatures that, while ghostly, unquestionably evoke the labeled categories.
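
At its core, that procedure is gradient ascent on the input rather than on the weights. Here is a rough sketch of the idea, with a single hand-built “neuron” over 64 made-up “pixels” standing in for a real deep network, and simple clipping standing in for the natural-image prior the actual toolkit uses; the real program backpropagates through every layer to get a gradient for each pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

# A stand-in "neuron": a ReLU over a fixed random weighting of 64 "pixels".
W = rng.normal(size=64)

def response(x):
    return max(0.0, float(W @ x))

x = rng.normal(scale=0.01, size=64)   # start from raw static
step = 0.05
for _ in range(300):
    # For this toy neuron, the gradient of the raw activation W.x with
    # respect to the input is just W; a deep net needs backpropagation.
    x += step * W
    x = np.clip(x, -1.0, 1.0)         # crude stand-in for an image prior

print(round(response(x), 2))          # far larger than for the initial static
```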

Idealized cats: Examples of synthetic ideal cat faces, generated by the Deep Visualization toolkit. These faces are generated by tweaking a generic starting image pixel by pixel, until a maximum response from AlexNet’s face neuron is achieved. (Credit: Jason Yosinski, et al. Understanding Neural Networks Through Deep Visualization. Deep Learning Workshop, International Conference on Machine Learning (ICML) (2015))

This seems to support his claim that the face neurons are indeed looking for faces, in some very general sense. But there’s a catch. To generate those pictures, Yosinski’s procedure relies on a statistical constraint (called a natural image prior) that confines it to producing images that match the sorts of structure that one finds in pictures of real-world objects. When he removes those rules, the toolkit still settles on an image that it labels with maximum confidence, but that image is pure static. In fact, Yosinski has shown that in many cases, the majority of images that AlexNet neurons prefer appear to humans as static. He readily admits that “it’s pretty easy to figure out how to make the networks say something extreme.”

To avoid some of these pitfalls, Dhruv Batra, an assistant professor of electrical and computer engineering at Virginia Tech, takes a higher-level experimental approach to interpreting deep networks. Rather than trying to find patterns in their internal structure—“people smarter than me have worked on it,” he demurs—he probes how the networks behave using, essentially, a robotic version of eye tracking. His group, in a project led by graduate students Abhishek Das and Harsh Agrawal, asks a deep net questions about an image, like whether there are drapes on a window in a given picture of a room. Unlike AlexNet or similar systems, Das’ network is designed to focus on only a small patch of an image at a time. It moves its virtual eyes around the picture until it has decided it has enough information to answer the question. After sufficient training, the deep network performs extremely well, answering about as accurately as the best humans.

Das, Batra, and their colleagues then try to get a sense of how the network makes its decisions by investigating where in the pictures it chooses to look. What they have found surprises them: When answering the question about drapes, the network doesn’t even bother looking for a window. Instead, it first looks to the bottom of the image, and stops looking if it finds a bed. It seems that, in the dataset used to train this neural net, windows with drapes may be found in bedrooms.

While this approach does reveal some of the inner workings of the deep net, it also reinforces the challenge presented by interpretability. “What machines are picking up on are not facts about the world,” Batra says. “They’re facts about the dataset.” That the machines are so tightly tuned to the data they are fed makes it difficult to extract general rules about how they work. More importantly, he cautions, if you don’t know how it works, you don’t know how it will fail. And when they do fail, in Batra’s experience, “they fail spectacularly disgracefully.”

Some of the obstacles faced by researchers like Yosinski and Batra will be familiar to scientists studying the human brain. Questions over the interpretations of neuroimaging, for example, are common to this day, if not commonly heeded. In a 2014 review of the field, cognitive neuroscientist Martha Farah wrote that “the worry ... is that [functional brain] images are more researcher inventions than researcher observations” [6]. The appearance of these issues in very different realizations of intelligent systems suggests that they could be obstacles, not to the study of this or that kind of brain, but to the study of intelligence itself.


Is chasing interpretability a fool’s errand? Zachary Lipton, of the University of California, San Diego, questions both the enterprise of interpreting neural networks, and the value of building interpretable machine learning models at all. He submitted a provocative paper to a workshop (organized by Malioutov and two of his colleagues) on Human Interpretability at this year’s International Conference on Machine Learning (ICML). The paper was titled “The Mythos of Model Interpretability,” and it challenged the definitions of interpretability, as well as the reasons that researchers give for seeking it.

Lipton points out that many scholars disagree over the very concept, which suggests to him either that interpretability doesn’t exist—or that there are many possible meanings. He argues that the impulse to interpret shouldn’t be trusted, and that researchers should, instead, use neural networks to free themselves to “explore ambitious models.” Interpretability, he says, keeps the models from reaching their full potential. He argues that one purpose of the field is to “build models which can learn from a greater number of features than any human could consciously account for.”

But this ability is both feature and failing: If we don’t understand how network output is generated, then we can’t know what aspects of the input were necessary, or even what might be considered input at all. Case in point: In 1996, Adrian Thompson of Sussex University used software to design a circuit by applying techniques similar to those that train deep networks today. The circuit was to perform a straightforward task: discriminate between two audio tones. After thousands of iterations, shuffling and rearranging circuit components, the software found a configuration that performed the task nearly perfectly.

Thompson was surprised, however, to discover that the circuit used fewer components than any human engineer would have used—including several that were not physically connected to the rest, and yet were somehow still necessary for the circuit to work properly.

He took to dissecting the circuit. After several experiments, he learned that its success exploited subtle electromagnetic interference between adjacent components. The disconnected elements influenced the circuit by causing small fluctuations in local electrical fields. Human engineers usually guard against these interactions, because they are unpredictable. Sure enough, when Thompson copied the same circuit layout to another batch of components—or even changed the ambient temperature—it failed completely.

The circuit exhibited a hallmark feature of trained machines: They are as compact and simplified as they can be, exquisitely well suited to their environment—and ill-adapted to any other. They pick up on patterns invisible to their engineers, but can’t know which of those patterns exist nowhere else. Machine learning researchers go to great lengths to avoid this phenomenon, called “overfitting,” but as these algorithms are used in more and more dynamic situations, their brittleness will inevitably be exposed.
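
Overfitting is easy to demonstrate at toy scale. In the illustrative sketch below (our own example, not from the article), a high-degree polynomial memorizes a handful of noisy training points almost perfectly yet does far worse than a modest fit on fresh data from the same underlying rule: exquisitely suited to its training set, brittle everywhere else.

```python
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# A few noisy samples from a simple underlying rule (a sine wave).
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + 0.2 * rng.normal(size=10)

# Fresh points from the same rule, standing in for "any other" environment.
x_test = np.linspace(0, 1, 200)
y_test = np.sin(2 * np.pi * x_test)

for degree in (3, 9):
    fit = Polynomial.fit(x_train, y_train, deg=degree)
    train_err = np.mean((fit(x_train) - y_train) ** 2)
    test_err = np.mean((fit(x_test) - y_test) ** 2)
    print(f"degree {degree}: train error {train_err:.3f}, test error {test_err:.3f}")

# Typically the degree-9 fit drives training error to nearly zero by chasing
# the noise, while its error on the fresh points blows up relative to degree 3.
```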

To Sanjeev Arora, a professor of computer science at Princeton University, this fact is the primary motivation to seek interpretable models that allow humans to intervene and adjust the networks. Arora points to two problems that could represent hard limits on the capabilities of machines in the absence of interpretability. One is “composability”—when the task at hand involves many different decisions (as with Go, or self-driving cars), networks can’t efficiently learn which are responsible for a failure. “Usually when we design things, we understand the different components and then we put them together,” he says. This allows humans to adjust components that aren’t appropriate for a given environment.

The other problem with leaving interpretability unsolved is what Arora calls “domain adaptability”—the ability to flexibly apply knowledge learned in one setting to another. This is a task human learners do very well, but machines can fail at in surprising ways. Arora describes how programs can be catastrophically incapable of adjusting to even subtle contextual shifts of the sort that humans handle with ease. For instance, a network trained to parse human language by reading formal documents, like Wikipedia, can fail completely in more vernacular settings, like Twitter.

By this view, interpretability seems essential. But do we understand what we mean by the word? Pioneering computer scientist Marvin Minsky coined the phrase “suitcase word” to describe many of the terms—such as “consciousness” or “emotion”—we use when we talk about our own intelligence [7]. These words, he proposed, reflect the workings of many different underlying processes, which are locked inside the “suitcase.” As long as we keep investigating these words as stand-ins for the more fundamental concepts, the argument went, our insight will be limited by our language. In the study of intelligence, could interpretability itself be such a suitcase word?

While many of the researchers I spoke to are optimistic that theorists will someday unlock the suitcase and discover a single, unifying set of principles or laws that govern machine (and, perhaps, human) learning, akin to Newton’s Principia, others warn that there is little reason to expect this. Massimo Pigliucci, a professor of philosophy at City University of New York, cautions that “understanding” in the natural sciences—and, by extension, in artificial intelligence—might be what Ludwig Wittgenstein, anticipating Minsky, called a “cluster concept,” one which could admit many, partly distinct, definitions. If “understanding” in this field does come, he says, it could be of the sort found not in physics, but evolutionary biology. Rather than Principia, he says, we might expect On the Origin of Species.

This doesn’t mean, of course, that deep networks are a harbinger of some new kind of autonomous life. But they could turn out to be just as difficult to understand as life. The field’s incremental, experimental approaches and post-hoc interpretations may not be some kind of desperate feeling around in the dark, hoping for theory to shine a light. They could, instead, be the only kind of light we can expect. Interpretability may emerge piecemeal, as a set of prototypic examples of “species” arranged in a taxonomy defined by inference and contingent, context-specific explanations.

At the ICML workshop’s close, some of the presenters appeared on a panel to try and define “interpretability.” There were as many responses as there were panelists. After some discussion the group seemed to find consensus that “simplicity” was necessary for a model to be interpretable. But, when pressed to define simplicity, the group again diverged. Is the “simplest” model the one that relies on the fewest number of features? The one that makes the sharpest distinctions? Is it the smallest program? The workshop closed without an agreed-upon answer, leaving the definition of one inchoate concept replaced by another.

As Malioutov puts it, “Simplicity is not so simple.”


Aaron M. Bornstein is a researcher at the Princeton Neuroscience Institute. His research investigates how we use memories to make sense of the present and plan for the future.


References

1. Caruana, R., et al. Intelligible models for healthcare: Predicting pneumonia risk and hospital 30-day readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721-1730 (2015).

2. Metz, C. Artificial Intelligence Is Setting Up the Internet for a Huge Clash with Europe. Wired.com (2016).

3. Yosinski, J., Clune, J., Nguyen, A., Fuchs, T., & Lipson, H. Understanding neural networks through deep visualization. arXiv:1506.06579 (2015).

4. Sergent, J., Ohta, S., & MacDonald, B. Functional neuroanatomy of face and object processing. A positron emission tomography study. Brain 115, 15–36 (1992).

5. Kanwisher, N., McDermott, J., & Chun, M.M. The fusiform face area: A module in human extrastriate cortex specialized for face perception. The Journal of Neuroscience 17, 4302–4311 (1997).

6. Farah, M.J. Brain images, babies, and bathwater: Critiquing critiques of functional neuroimaging. Interpreting Neuroimages: An Introduction to the Technology and Its Limits 45, S19-S30 (2014).

7. Brockman, J. Consciousness Is a Big Suitcase: A talk with Marvin Minsky. Edge.org (1998).



from Hacker News http://ift.tt/YV9WJO
via IFTTT

Netflix's New "Slow TV" Shows are Meditative and Chill

http://ift.tt/2bTYIBn


If you really want a little chill with your Netflix, the service just added a bunch of “slow TV” shows, things like long train rides through the countryside, relaxing views of canal rides, crackling fireplaces, quiet video of people knitting, and so on. They’re all things you can put on in the background while you work, focus, or just relax.

“Slow TV,” as Kottke describes here, is just the airing of a completely ordinary event from start to finish (much like our own Facebook Live “Lunchtime Views” series). Here’s what you can tune into if you’re looking for something maybe a little ASMR, or maybe just super chill and not-at-all attention-grabbing:

National Firewood Evening
National Firewood Morning
National Firewood Night
National Knitting Evening
National Knitting Morning
National Knitting Night
Northern Passage
Northern Railway
Salmon Fishing
The Telemark Canal
Train Ride Bergen to Oslo

If you’re noticing a Scandinavian thrust to some of these, it’s intentional—a lot of these programs really caught on in Norway, so that’s where Netflix is getting them from. Some of them aren’t exactly the same end-to-end shows that were originally aired, if you’re familiar with them, but they’re enough to help you relax a little after a long day. Hit the link below to read a bit more about them.

Slow TV comes to Netflix | Kottke



from Lifehacker http://lifehacker.com
via IFTTT

How Anesthesia Works, and Why It’s So Complicated

http://ift.tt/2bVf67A


There’s a lot more to “going under” than just counting down from ten and then waking up when the surgery is over, although it’s great we can do that. Here’s how anesthesia affects you, and what makes it risky enough for your doctor to warn you about it before surgery.

The video above goes into the details, but if you can’t watch, here are the basics. There are three main types of anesthesia: regional, inhalational, and intravenous. Regional anesthesia prevents electrical pain signals from traveling from one part of your body to your brain. Inhalational anesthesia affects your whole nervous system, including your brain, and is often used together with intravenous anesthesia to put you under and keep you unconscious during major surgery.

Anesthesia affects your nervous system and brain, but also other vital organs like your heart, lungs, and liver, which is why it’s so important that the anesthesiologist mixes the right balance for you. They also monitor your vital signs during surgery so they can adjust the anesthesia as needed. Check out the video above for a little more of the history, what some of the common drugs are and where they came from, and more.

How Does Anesthesia Work? | Ted-Ed (YouTube)



from Lifehacker http://lifehacker.com
via IFTTT