Westworld’s season 3 trailer questions the relationships between hosts and humans

https://ift.tt/2LvTpNZ

Part of Westworld’s charm has always been trying to solve the puzzle before it’s spelled out, but the third season is experiencing a shift in direction. A new trailer for the third season introduces a different futuristic world, set in neo-Los Angeles, with a story that focuses more on the lives of robots and humans, rather than an overarching riddle.

The third season picks up right where the second ended, after the massacre in Westworld, with Dolores finally free. It’s in the real world that she meets a construction worker named Caleb (played by Breaking Bad’s Aaron Paul), and the pair develop an intriguing relationship.

Dolores, who is out enjoying her newfound freedom, is at the heart of this story, but not all hosts are enjoying the same freedom. Maeve apparently remains inside a World War II-themed park, and there appears to be just as much chaos inside that park as there is outside of it. What’s evident from this trailer is that people are reckoning with the world they’ve built, fictional and interactive, and this season will see big questions addressed. For example, what if you could just shut it all down?

“This season is a little less of a guessing game and more of an experience with the hosts finally getting to meet their makers,” co-showrunner Jonathan Nolan told Entertainment Weekly.

Nolan added that part of the plan was always to place “our characters in radically different circumstances” with every new season. The first two mostly took place inside of Westworld, but this season seems like it’ll take place largely outside the park.

Westworld’s third season premieres in 2020.



from The Verge https://ift.tt/1jLudMg
via IFTTT

Watch the first intense trailer for the next season of The Expanse

https://ift.tt/2Su2qYt

Promotional images for The Expanse, featuring Drummer, Amos, Holden, Naomi, and Alex. Image: Amazon Studios

Amazon has finally revealed when the next season of The Expanse will begin streaming: December 13th, 2019. The company made the announcement today at San Diego Comic-Con, releasing a trailer and a longer clip of the upcoming fourth season.

Based on the science fiction book series by James S.A. Corey, The Expanse is set several centuries in the future, when humanity has colonized the solar system and broken into three major factions: Earth, Mars, and the Belters, an alliance of settlements in the asteroid belt and outer planets. The bulk of the series follows the crew of a spaceship called the Rocinante as they find themselves in the midst of an interplanetary war and contend with the discovery of an alien substance called the protomolecule, which was weaponized by a malevolent Earth corporation and went on to create a gate to thousands of worlds beyond our home system.

This latest season will be based on the fourth installment of the series, Cibola Burn, in which the crew of the Rocinante are dispatched to a colonial world known as Ilus to settle a dispute between the corporation that claimed the world and the settlers who touched down on it first. The planet is also home to the remnants of the alien civilization that created the protomolecule. The trailer shows off some of the troubles the crew will face, as well as some new faces, like Adolphus Murtry (played by Pacific Rim’s Burn Gorman), the brutal and temperamental security chief for the company that’s laid claim to the planet, Royal Charter Energy.

In addition to the trailer, Amazon released a longer clip from the show, showing the crew of the Rocinante landing on Ilus, which will be one of the major locations this season.

Some spoilers ahead.

The Syfy channel picked up the series for adaptation back in 2015 and, over the course of three seasons, adapted the first three books: Leviathan Wakes, Caliban’s War, and Abaddon’s Gate. Syfy canceled the series last year, but it was quickly picked up by Amazon for a fourth season.

The first three seasons of the show tell a relatively contained story: tensions between Earth, Mars, and the Belters bring the solar system to war, and the alien protomolecule upends everything when it creates a massive ring gate that connects the solar system to a vast interstellar network. Up until this point in the series, humanity has been fighting amongst itself, but now people have access to thousands of new, uninhabited planets. Syfy’s tenure ended right at that moment, and had the show not continued, it would have been a natural stopping place.

Image: Amazon Studios

But the book series will run for nine installments, and the penultimate book, Tiamat’s Wrath, hit bookstores earlier this year. Cibola Burn is a good opportunity for Amazon to reboot the show, as it introduced a couple of major new storylines for the series. Showrunner Naren Shankar told IGN on Wednesday that the shift from Syfy to Amazon brings freedoms the team didn’t have before. “We’re no longer bound by the archaic content, language, and runtime restrictions you’re constantly forced to deal with on broadcast and basic cable.”

The Expanse’s fourth season will debut on December 13th on Amazon Prime Video.



from The Verge https://ift.tt/1jLudMg

Ring has made it ridiculously easy to surround your house with smart lights and cameras

https://ift.tt/2XAxU00

Ring Stick Up Cam.

It wasn’t that long ago that installing security cameras and lighting around your home was an expensive, cumbersome process. You needed to wire power and connectivity to the cameras, dig trenches or run wiring for the lights, and then rig up a system to control and monitor it all.

Ring’s latest products this year address those challenges head-on: they are inexpensive, easy to install, and run on batteries or even solar so you don’t have to run wiring to them at all. Ring has had battery-operated doorbells for a few years, but now, it has a whole suite of products that require no wiring and very little work to install.

For the past few weeks, I’ve been testing Ring’s Stick Up Cam, Smart Lighting, and Door View Cam, each of which addresses a different need depending on your situation. If you’ve been wanting to install security lighting and cameras in your home, it’s never been easier than now.


Ring Stick Up Cam

Ring Stick Up Cam Battery.

The $179 Stick Up Cam Battery is a battery-powered connected security camera, complete with motion alerts, full HD resolution, and night vision. It can be mounted almost anywhere inside or outside of your home in just minutes.

One of the more clever things about the Stick Up Cam is its flexible, built-in stand, which lets you either place it on a shelf or mantle inside or invert it and quickly install it under an awning outside with just three screws, as I’ve done. It can also be positioned to mount the camera flat against a wall.

From there, the Stick Up Cam behaves very similarly to Ring’s popular video doorbells: it connects to your home Wi-Fi network, records clips and provides push alerts to your phone when it detects motion, and has a two-way microphone and speaker so you can communicate with someone who is in front of the camera with your phone or Amazon Echo Show. You can customize the range for the motion detection as well as the time of day when you’ll get alerts so you don’t get pinged all day long if it’s near a busy entryway. Ring’s app provides a way to view clips that are captured by the camera as well as adjust its motion detection and alert settings. The Stick Up Cam allows you to get motion alerts, view the live feed, and use the two-way communication features out of the box. But to review any recordings, you’ll need to pay for Ring’s Protect cloud service, which starts at $3 per month or $30 per year.

I mounted the Stick Up Cam above a deck on the side of my house, which seems to make the most sense for this kind of product. I can have it monitor an entryway — the door to my deck — without the need to install a full doorbell that won’t ever be used. The Stick Up Cam uses the same battery as the Ring Video Doorbell and Video Doorbell 2, and based on my testing, it should last about three months between charges. Ring also sells a wired version of the Stick Up Cam and one that comes with a solar panel, but both of those require more complicated installs. Changing the battery doesn’t even require taking the camera down from its mount, but if you’re putting it in a spot that’s high up and hard to reach, you might want to consider the wired version and deal with the extra installation work at the beginning.

Smart Lighting

Ring Smart Lighting Pathlight.

The new Smart Lighting line is Ring’s first product of this kind, coming from the company’s acquisition of lighting company Mr Beams. Simplified, the Smart Lighting line is a set of exterior lights and motion detectors for your home that are connected to the internet, so you can manage them and get notifications from them in the Ring app. They function a lot like Ring’s various outdoor cameras, without the whole camera part.

The line includes floodlights, spotlights, a light to mount on steps, a motion detector, and pathlights for a walkway. All of the lights can run on battery power and wirelessly communicate with the necessary $49.99 Smart Lighting Bridge to connect them to the internet through your home Wi-Fi network. There’s also a transformer that can be installed on existing landscape lighting to connect that to Ring’s system.

I’ve been testing the Pathlights, which are available in a two-pack with the Bridge included for $79.99 or standalone for $29.99, as well as the $24.99 Smart Lighting Motion Sensor. The Pathlights actually have motion sensors built into them, so they don’t need the secondary sensor, but if you want to trigger them from farther distances or around blind corners, the motion sensor can do that wirelessly.

The Ring Smart Lighting Pathlight has adjustable brightness and 360 degrees of lighting. It also has a built-in motion sensor.

The Pathlights run on four D-cell batteries for up to a year, according to Ring, and they provide 360 degrees of lighting. You can adjust the brightness, timeout duration, and when they turn on and off through the Ring app.

Setting up the Smart Lighting system can be done in just a few minutes

The Smart Lighting system is easy to set up — you do it all through the Ring app, and it can be up and running in minutes — and requires no wiring or time-consuming installation process. But compared to standard motion-sensing lights you can buy, they are expensive. The real worth of the Smart Lighting system is found when it’s paired with Ring’s other products, like the video doorbells or cameras. You can use the motion detectors on the Smart Lighting products to trigger mobile alerts or the cameras to start recording clips, or you can control them with voice commands to Amazon’s Alexa assistant.

The Ring Smart Lighting Bridge is necessary to connect the lights to the internet and control them with the app.

Since I didn’t really need to use the Motion Sensor to turn on the lights, I stuck it in my mailbox (about 70 feet from where I have the bridge set up in my living room) and set it to give me push alerts whenever it detected motion. This has reliably worked to let me know when the mail has arrived and has been more reliable than the Zigbee contact sensors I’ve used in the past for the same purpose. Ring says it is using a proprietary RF protocol to connect the lights to the bridge, which provides longer distances and more reliable connectivity.

Overall, the Smart Lighting system’s value is in its ease of use and expansive connectivity. You’d be hard-pressed to find another connected lighting system that’s this accessibly priced and easy to install.

Door View Cam

The Ring Door View Cam can be installed in place of a standard peephole viewer.

Ring is perhaps best known for its video doorbells, which let you get alerts when the bell is rung, see who’s at your door, and speak to them through your phone. But even though it has an extensive line of wired and battery-operated versions, none of those are particularly useful for apartment dwellers.

That’s where the $199 Door View Cam comes in. It’s basically a Ring Video Doorbell 2 that can be mounted on the peephole of a door without damaging the door itself. You can remove the Door View Cam when you move out and replace the original peephole without leaving a mark.

The Door View Cam is basically a Ring Video Doorbell 2 that can fit in a standard peephole

Much like the Video Doorbell 2, the Door View Cam is a 1080p camera with night vision and motion detection. It runs on a rechargeable battery (the same one as the Stick Up Cam and Ring’s other video doorbells), and it allows for two-way communication through your phone.

Cleverly, the Door View Cam preserves the ability to look through the peephole to see who’s there, so you don’t have to pull your phone out just to see who’s on the other side of the door. There’s also a cover on the inside of the door to prevent someone from looking back through it at you.

You can still look through the peephole when the Door View Cam is installed or block the view with a privacy shield.

Ring says it takes as few as five minutes to install the Door View Cam. Based on my testing with a demo door, I agree, provided your door’s thickness is between 34mm (about 1.3 inches) and 55mm (about 2.2 inches). The Door View Cam comes with all of the tools and hardware you need to set it up.

Like the other video doorbells Ring makes, you can adjust the zones for motion detection on the Door View Cam, which can help prevent false triggers if there’s a lot of foot traffic in front of your door. It’s also possible to configure “privacy zones,” which will prevent the camera from recording areas you specify, such as a neighbor’s entryway.

At $200, the Door View Cam is a little harder to justify than Ring’s more permanent video doorbells: it’s really only useful to apartment dwellers who get regular visitors at their door, not those whose buildings have an external buzzer or lobby. But it does provide an option for those who can’t use a traditional video doorbell that requires permanent installation.


The overarching theme of Ring’s product releases this year is ease of installation and use. You can get other video doorbells (though you’ll have a hard time finding another option that runs on batteries), security cameras, and landscape / security lighting for a lot less than what Ring charges, but you’ll likely have to deal with much more complicated wiring for power and connectivity, if connectivity is even an option. And you won’t get the integration across the whole system that Ring is offering. Ring’s combination of wire-free installation and integration across its entire product line is tough to beat, even if it costs more at the outset.

Photography by Dan Seifert / The Verge

Vox Media has affiliate partnerships. These do not influence editorial content, though Vox Media may earn commissions for products purchased via affiliate links. For more information, see our ethics policy.



from The Verge https://ift.tt/1jLudMg

MIT’s Gen programming system flattens the learning curve for AI projects

https://ift.tt/2Jnl8NB


Countless online courses promise to inculcate budding programmers in the ways of AI, but a team of MIT researchers hopes to flatten the learning curve further. In a paper (“Gen: A General-Purpose Probabilistic Programming System with Programmable Inference“) presented this week at the Programming Language Design and Implementation Conference, they propose Gen, a novel probabilistic programming system that lets users write algorithms without having to contend with equations or high-performance code.

Gen’s source code is publicly available, and the system will be presented at upcoming developer conferences, including Strange Loop in St. Louis, Missouri, this September and JuliaCon in Baltimore next month.

“One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” said first author and PhD candidate in MIT’s department of electrical engineering and computer science Marco Cusumano-Towner. “We also want to increase productivity, which means making it easier for experts to iterate and prototype their AI systems [rapidly].”

To this end, the paper’s coauthors demonstrate that a short Gen program written in Julia, a general-purpose programming language also developed at MIT, can perform computer vision tasks like inferring 3D body poses, while another Gen program can generate sophisticated statistical models typically used to interpret and predict data patterns. The team says it’s also well-suited to the fields of robotics and statistics.

Gen includes components that perform graphics rendering, deep learning, and probability simulation behind the scenes in tandem, improving task accuracy and speed by “several orders of magnitude” compared with previous leading systems. It additionally provides a high-level and highly efficient infrastructure for inference tasks including (but not limited to) variational inference and deep learning, and it enables users to combine generative models written in Julia with systems written in Google’s TensorFlow machine learning framework and custom algorithms based on numerical optimization techniques.
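Gen programs themselves are written in Julia, but the workflow the paragraph above describes (write a forward generative model, then hand it to a generic, tunable inference routine) can be sketched in plain Python. Everything below is a from-scratch toy; none of the names or functions come from Gen's actual API.

```python
import math
import random

# Toy probabilistic program: a latent slope generates noisy points y = slope * x.
# Inference is generic importance sampling over the prior, decoupled from the model.

def log_likelihood(slope, observed, noise_sd=0.5):
    """Log-probability of points y[x] under y = slope * x plus Gaussian noise."""
    var = noise_sd ** 2
    return sum(-((y - slope * x) ** 2) / (2 * var)
               for x, y in enumerate(observed))

def infer_slope(observed, num_particles=2000):
    """Draw slopes from a N(0, 1) prior, weight each by how well it explains
    the data, and return the weighted posterior-mean estimate."""
    slopes = [random.gauss(0.0, 1.0) for _ in range(num_particles)]  # prior samples
    weights = [math.exp(log_likelihood(s, observed)) for s in slopes]
    total = sum(weights)
    return sum(s * w for s, w in zip(slopes, weights)) / total

random.seed(0)
observed = [2.0 * x for x in range(5)]   # data generated with slope = 2
estimate = infer_slope(observed)         # posterior mean lands near 2
```

The point is the separation of concerns: the model and the inference algorithm are independent pieces, and Gen's contribution is making the inference side programmable and efficient rather than fixed, which this sketch only hints at.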

Gen has already seen some pickup, according to the coauthors. Intel is collaborating with MIT to use Gen for 3D pose estimation from its depth-sensing cameras, and MIT’s Lincoln Laboratory is developing applications in aerial robotics for humanitarian relief and disaster response. Elsewhere, Gen is becoming increasingly central to an MIT-IBM Watson AI Lab project and to the Defense Advanced Research Projects Agency’s (DARPA) ongoing Machine Common Sense initiative, which seeks to model human common sense at the level of an 18-month-old child.

“[Gen] allows a problem-solver to use probabilistic programming, and thus have a more principled approach to the problem, but not be limited by the choices made by the designers of the probabilistic programming system,” said director of research at Google Peter Norvig, who wasn’t involved in the research, in a statement. “General-purpose programming languages … have been successful because they … make the task easier for a programmer, but also make it possible for a programmer to create something brand new to efficiently solve a new problem. Gen does the same for probabilistic programming.”



from VentureBeat https://venturebeat.com

George Will's Lonely Battle Against Republican Nationalism and Democratic Socialism

http://bit.ly/2ZDMBki

Three years ago this month, George Will, America's foremost conservative newspaper columnist, officially quit the Republican Party over its acquiescence to Donald Trump. "This is not my party," he said then. It's even less so now.

Yet the erudite author and television commentator is not ready to give up on conservatism just yet. In his career-punctuating new book The Conservative Sensibility, Will makes the forceful argument that the natural rights-based classical liberalism of James Madison is the antidote to authoritarianism found on both the Trumpian right and progressive left.

In an interview with Reason's Matt Welch, Will talks about the importance of rehabilitating America's withered constitutional architecture, ponders what the punditry class got wrong in 2016, and reminisces about what it was like for a conservative columnist to criticize an erratic Republican president way back in 1973.

Edited by Ian Keyser. Intro by Todd Krainin. Camera by Jim Epstein.

Photo Credit: Brian Cahn/ZUMA Press/Newscom

'Running Waters' by Jason Shaw is licensed under CC BY 3.0

Subscribe to our YouTube channel.

Like us on Facebook.

Follow us on Twitter.

Subscribe to our podcast at iTunes.



from Hit & Run : Reason Magazine https://reason.com

The New York Times course to teach its reporters data skills is now open-source

https://nyti.ms/2WBxUfU


And how your reporters can learn to love them, too

Illustration by Simone Noronha

Five years ago, a lot of people in journalism were asking, wide-eyed, “Should journalists learn to code?”

The consensus for most journalists was: probably not. And over time, the “should we code?” questions quieted down.

But some people did learn. At The New York Times and elsewhere, coder-journalists have mashed up databases to discover wrongdoing, designed immersive experiences that transport readers to new places and created tools that change the way we work.

Even with some of the best data and graphics journalists in the business, we identified a challenge: data knowledge wasn’t spread widely among desks in our newsroom and wasn’t filtering into news desks’ daily reporting.

Yet fluency with numbers and data has become more important than ever. While journalists were once fond of joking that they got into the field because of an aversion to math, numbers now form the foundation of beats as wide-ranging as education, the stock market, the Census and criminal justice. More data is released than ever before — there are nearly 250,000 datasets on data.gov alone — and increasingly, governments, politicians and companies try to twist those numbers to back their own agendas.

Last year, The Times’s Digital Transition team decided to look at how we could help grow beat reporters’ data knowledge to help cover these issues. Our team’s mission is to “continuously transform the newsroom,” and with a focus on training all desks, we were well positioned to address these issues on a large scale.

We wanted to help our reporters better understand the numbers they get from sources and government, and give them the tools to analyze those numbers. We wanted to increase collaboration between traditional and non-traditional journalists for stories like this visual examination of New York football, our campaign finance coverage and this in-depth look at where the good jobs are. And with more competition than ever, we wanted to empower our reporters to find stories lurking in the hundreds of thousands of databases maintained by governments, academics and think tanks. We wanted to give our reporters the tools and support necessary to incorporate data into their everyday beat reporting, not just in big and ambitious projects.

Data Training Program

After talking to leaders in our newsroom about how we could support journalists who wanted to obtain more data skills, we ran two pilot training programs, then expanded into an intensive boot camp open to reporters on all desks. Over the past 18 months, we’ve trained more than 60 reporters and editors, who have gone on to produce dozens of data stories for The Times.

[Learn how 5 reporters in our newsroom used data skills from the training to create groundbreaking work.]

The training is rigorous. Based in Google Sheets, it starts with beginner skills like sorting, searching and filtering; progresses to pivot tables; and ends with advanced data-cleaning skills such as if-then statements and VLOOKUP. Along the way, we discuss data-friendly story structures, data ethics and how to bulletproof data stories. We also invite speakers from around The Times, including the CAR team, Graphics and the Interactive News team, to talk about how they report with data.
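The curriculum lives in Google Sheets, but the same core operations (sorting, filtering, pivot tables and VLOOKUP-style lookups) map one-to-one onto a scripting library such as pandas, which is often the next step for reporters who outgrow spreadsheets. A sketch with invented school data, purely for illustration:

```python
import pandas as pd

# Hypothetical beat data; every number here is made up for illustration.
schools = pd.DataFrame({
    "district":   ["North", "North", "South", "South"],
    "school":     ["A", "B", "C", "D"],
    "enrollment": [500, 350, 420, 610],
})
budgets = pd.DataFrame({
    "district": ["North", "South"],
    "budget_millions": [12.5, 9.8],
})

# Sorting and filtering (Sheets: sort range, filter views)
largest = schools[schools["enrollment"] > 400].sort_values(
    "enrollment", ascending=False)

# Pivot table: total enrollment per district
by_district = schools.pivot_table(
    index="district", values="enrollment", aggfunc="sum")

# VLOOKUP equivalent: join each school row to its district's budget
merged = schools.merge(budgets, on="district", how="left")
```

Each line corresponds to one of the spreadsheet skills in the course; the structure of the analysis carries over even though the tool changes.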

Over a period of three weeks, the class meets for two hours every morning. This includes time for reporters to work on data-driven stories and apply the skills they’ve learned in the course to their own beats. We also train the reporters’ editors. In separate lunch sessions, they come together to discuss tips for editing data stories, common pitfalls to avoid and advice for working with reporters.

Each time we run the training, we have two or three times as many sign-ups as we have slots. As a result, we instituted a selection process in which reporters are nominated by the leaders of their desks. From that group, we build the cohort with the aim of having a diverse mix of desks, beats, gender, race, reporting timelines, ages and tenures at The Times. Once selected, reporters commit to attend all sessions and come prepared with data-driven story ideas for their beats.

To help reporters with their first data stories, we support them with on-demand data help for the two months after the training. Every month, we gather all the groups together for a lunch and learn.

Releasing Our Materials

While we recognize most publications aren’t able to offer their reporters a three-week data training, we know that increasing data skills is hardly a Times-specific need. Even in smaller newsrooms, making time to teach someone data skills has benefits in the long run. But it can be difficult and time-consuming to build out proper materials, especially if developing training programs isn’t your sole job.

So, we’ve decided to share our materials in the hopes that students, professors or journalists at other publications might find them useful.

Over the last four rounds of data training, Digital Transition has amassed dozens of spreadsheets, worksheets, cheat sheets, slide decks, lesson plans and more, created by me, my fellow Digital Transition editor Elaine Chen and various speakers around The Times.

View them here.

Here’s an overview of what’s included in these files:

  • Training Information: A list of skills included in the training, both technical and things like data ethics, as well as the schedule from our last round of training. The schedule shows how we switch off between “core skills” sessions that show new skills with a fake dataset, “practice” which applies those skills and “story sessions” which help reporters apply the skills they’ve learned to stories they are working on now.
  • Data Sets: Some of our data sets and worksheet activities from practice sessions. We’ve organized them into three difficulty levels. Note that we’ve altered these datasets in order to relate closely to what we cover each week, so please don’t use them in your reporting.
  • Cheat Sheets: Taken from each core skill session, covering most of the technical skills we practice with reporters. They are meant as reference materials for reporters as they practice and apply skills. Since the training is in Google Sheets, the technical prompts are for that program.
  • Tip Sheets: Some of the more random and non-technical skills we cover, such as how to bulletproof your work, how to brainstorm with data and how to think creatively while writing with data.

If you have any questions about these materials, feel free to contact me at Lindsey.Cook@nytimes.com.


Lindsey Cook is an editor for digital storytelling and training at The Times. Previously, she worked as the data editor for news at U.S. News & World Report. She has taught journalism at American University and is a graduate of the University of Georgia.



from Hacker News http://bit.ly/YV9WJO

Introducing Google Research Football: A Novel Reinforcement Learning Environment

http://bit.ly/31gL3yn

Posted by Karol Kurach, Research Lead and Olivier Bachem, Research Scientist, Google Research, Zürich

The goal of reinforcement learning (RL) is to train smart agents that can interact with their environment and solve complex tasks, with real-world applications in robotics, self-driving cars, and more. The rapid progress in this field has been fueled by making agents play games such as the iconic Atari console games, the ancient game of Go, or professionally played video games like Dota 2 or Starcraft 2, all of which provide challenging environments where new algorithms and ideas can be quickly tested in a safe and reproducible manner. The game of football is particularly challenging for RL, as it requires a natural balance between short-term control, learned concepts such as passing, and high-level strategy.

Today we are happy to announce the release of the Google Research Football Environment, a novel RL environment where agents aim to master the world’s most popular sport—football. Modeled after popular football video games, the Football Environment provides a physics-based 3D football simulation where agents control either one or all of the football players on their team, learn how to pass between them, and manage to overcome their opponent’s defense in order to score goals. The Football Environment provides several crucial components: a highly optimized game engine, a demanding set of research problems called the Football Benchmarks, and the Football Academy, a set of progressively harder RL scenarios. In order to facilitate research, we have released a beta version of the underlying open-source code on GitHub.


Football Engine
The core of the Football Environment is an advanced football simulation, called the Football Engine, which is based on a heavily modified version of Gameplay Football. Based on input actions for the two opposing teams, it simulates a match of football, including goals, fouls, corner and penalty kicks, and offsides. The Football Engine is written in highly optimized C++ code, allowing it to be run on off-the-shelf machines, both with and without GPU-based rendering enabled. This allows it to reach a performance of approximately 25 million steps per day on a single hexa-core machine.
The Football Engine is an advanced football simulation that supports all the major football rules such as kickoffs (top left), goals (top right), fouls, cards (bottom left), corner and penalty kicks (bottom right), and offside.
The Football Engine has additional features geared specifically towards RL. First, it allows learning both from different state representations, which contain semantic information such as the players’ locations, and from raw pixels. Second, to investigate the impact of randomness, it can be run in a stochastic mode (enabled by default), in which there is randomness in both the environment and the opponent AI’s actions, and in a deterministic mode, where there is none. Third, the Football Engine is compatible out of the box with the widely used OpenAI Gym API. Finally, researchers can get a feeling for the game by playing against each other or their agents, using either keyboards or gamepads.
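Gym compatibility means the engine is driven by the familiar reset/step interaction loop. The stand-in environment below is a toy written here so the snippet runs on its own; the real environment's constructor and scenario names live in the open-source repo and are not reproduced here.

```python
import random

class TinyFootballLikeEnv:
    """Toy stand-in exposing the Gym-style interface (reset/step) that the
    Football Engine implements; it is NOT the real environment."""
    def __init__(self, episode_len=100):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0]                            # observation vector

    def step(self, action):
        self.t += 1
        # Reward 1 with small probability when "shooting" (action 1).
        reward = 1.0 if (action == 1 and random.random() < 0.05) else 0.0
        done = self.t >= self.episode_len
        return [float(self.t)], reward, done, {}


# The standard Gym agent-environment loop; the same loop would drive the
# real engine once constructed through the project's own API.
random.seed(0)
env = TinyFootballLikeEnv()
obs, done, score = env.reset(), False, 0.0
while not done:
    action = random.choice([0, 1])              # a trained policy goes here
    obs, reward, done, info = env.step(action)
    score += reward
```

Because the loop is the generic Gym one, any agent written against the Gym API (DQN, IMPALA, or a random baseline as above) can be pointed at the engine without modification.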

Football Benchmarks
With the Football Benchmarks, we propose a set of benchmark problems for RL research based on the Football Engine. The goal in these benchmarks is to play a “standard” game of football against a fixed rule-based opponent that was hand-engineered for this purpose. We provide three versions: the Football Easy Benchmark, the Football Medium Benchmark, and the Football Hard Benchmark, which only differ in the strength of the opponent.

As a reference, we provide benchmark results for two state-of-the-art reinforcement learning algorithms: DQN and IMPALA, which both can be run in multiple processes on a single machine or concurrently on many machines. We investigate both the setting where the only rewards provided to the algorithm are the goals scored and the setting where we provide additional rewards for moving the ball closer to the goal.
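The two reward settings described above, sparse goals-only versus an additional bonus for moving the ball toward the goal, reflect the classic reward-shaping technique, which is typically implemented as a wrapper around the environment. A self-contained sketch with a toy environment (none of this is the Football Engine's actual code):

```python
class CountdownEnv:
    """Toy stand-in: the observation is the ball's distance to the goal,
    which shrinks each step; the sparse reward fires only when it hits 0."""
    def __init__(self, start_distance=5):
        self.start_distance = start_distance
        self.distance = start_distance

    def reset(self):
        self.distance = self.start_distance
        return self.distance

    def step(self, action):
        self.distance -= 1
        done = self.distance == 0
        sparse_reward = 1.0 if done else 0.0    # "goal scored" only at the end
        return self.distance, sparse_reward, done, {}


class ShapedRewardWrapper:
    """Adds a bonus proportional to progress toward the goal on top of the
    sparse scoring reward. Illustrative only."""
    def __init__(self, env, bonus_scale=0.1):
        self.env = env
        self.bonus_scale = bonus_scale
        self.prev_distance = None

    def reset(self):
        obs = self.env.reset()
        self.prev_distance = obs
        return obs

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        reward += self.bonus_scale * (self.prev_distance - obs)  # closer => bonus
        self.prev_distance = obs
        return obs, reward, done, info


env = ShapedRewardWrapper(CountdownEnv())
env.reset()
total, done = 0.0, False
while not done:
    _, reward, done, _ = env.step(0)
    total += reward
# total is 1.0 (sparse goal) + 5 * 0.1 (shaping bonus) = 1.5
```

The shaping signal gives the learner gradient on every step instead of only when a goal is scored, which is exactly why the shaped setting is easier for algorithms like DQN than the sparse one.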

Our results indicate that the Football Benchmarks are interesting research problems of varying difficulties. In particular, the Football Easy Benchmark appears to be suitable for research on single-machine algorithms while the Football Hard Benchmark proves to be challenging even for massively distributed RL algorithms. Based on the nature of the environment and the difficulty of the benchmarks, we expect them to be useful for investigating current scientific challenges such as sample-efficient RL, sparse rewards, or model based RL.
The average goal difference of agent versus opponent at different difficulty levels for different baselines. The Easy opponent can be beaten by a DQN agent trained for 20 million steps, while the Medium and Hard opponents require a distributed algorithm such as IMPALA that is trained for 200 million steps.
Football Academy & Future Directions
As training agents for the full Football Benchmarks can be challenging, we also provide Football Academy, a diverse set of scenarios of varying difficulty. It lets researchers get the ball rolling on new research ideas, test high-level concepts (such as passing), and investigate curriculum learning, in which agents learn from progressively harder scenarios. Examples of the Football Academy scenarios include settings where agents have to learn how to score against an empty goal, how to quickly pass between players, and how to execute a counter-attack. Using a simple API, researchers can further define their own scenarios and train agents to solve them.
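A custom scenario is essentially a function that places the ball and the players before kickoff. The sketch below shows the shape of such a definition; the builder methods and role names are illustrative stand-ins modeled on the released scenarios, not the exact gfootball API, and a tiny recording builder is included so the snippet runs standalone:

```python
class ScenarioBuilder:
    """Tiny stand-in that records setup calls; the real gfootball builder
    object exposes methods along these general lines (names illustrative)."""

    def __init__(self):
        self.calls = []

    def SetBallPosition(self, x, y):
        self.calls.append(("ball", x, y))

    def AddPlayer(self, x, y, role):
        self.calls.append(("player", x, y, role))

def build_scenario(builder):
    # A hypothetical 2-vs-1 counter-attack: ball at midfield,
    # two attackers running at a lone defender.
    builder.SetBallPosition(0.0, 0.0)
    builder.AddPlayer(-0.1, 0.0, "CM")   # attacker carrying support
    builder.AddPlayer(-0.1, 0.2, "CM")   # second attacker, wide
    builder.AddPlayer(0.6, 0.0, "CB")    # the lone defender
```

Once registered with the engine, such a scenario can be used for training exactly like the built-in Academy drills.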

Top: A successful policy that runs towards the goal (as required, since a number of opponents chase our player) and scores against the goal-keeper. Second: A beautiful way to drive and finish a counter-attack. Third: A simple way to solve a 2-vs-1 play. Bottom: The agent scores after a corner kick.
The Football Benchmarks and the Football Academy consider the standard RL setup, in which agents compete against a fixed opponent, i.e., where the opponent can be considered a part of the environment. Yet, in reality, football is a two-player game in which two different teams compete and each has to adapt to the actions and strategy of the opposing team. The Football Engine provides a unique opportunity for research into this setting and, once we complete our ongoing effort to implement self-play, even more interesting research settings can be investigated.

Acknowledgments
This project was undertaken together with Anton Raichuk, Piotr Stańczyk, Michał Zając, Lasse Espeholt, Carlos Riquelme, Damien Vincent‎, Marcin Michalski, Olivier Bousquet‎ and Sylvain Gelly at Google Research, Zürich. We also wish to thank Lucas Beyer, Nal Kalchbrenner, Tim Salimans and the rest of the Google Brain team for helpful discussions, comments, technical help and code contributions. Finally, we would like to thank Bastiaan Konings Schuiling, who authored and open-sourced the original version of this game.


from Google Research Blog http://bit.ly/2Dcd8Na
via IFTTT

One [power] law to rule them all?

http://bit.ly/2JVBr6O


Power-law distributions seem to be everywhere, and not just in word counts and whale whistles. Most people know that Vilfredo Pareto found them in the distribution of wealth, two or three decades before Udny Yule showed that stochastic processes like those in evolution lead to such distributions, and George Kingsley Zipf found his eponymous law in word frequencies. Since then, power-law distributions have been found all over the place — Wikipedia lists

… the sizes of craters on the moon and of solar flares, the foraging pattern of various species, the sizes of activity patterns of neuronal populations, the frequencies of words in most languages, frequencies of family names, the species richness in clades of organisms, the sizes of power outages, criminal charges per convict, volcanic eruptions, human judgements of stimulus intensity …

My personal favorite is the noise something makes when you crumple it up, as discussed by Eric Kramer and Alexander Lobkovsky, "Universal Power Law in the Noise from a Crumpled Elastic Sheet" (1995), referenced in "Zipf and the general theory of wrinkling" (11/15/2003).

Contradicting the Central Limit Theorem's implications for what is "normal", power law distributions seem to be everywhere you look.

Or maybe not?

Many of the alleged "power-law" examples are actually log-normal, or some other heavy-tailed distribution, according to a paper by Aaron Clauset, Cosma Rohilla Shalizi, and M. E. J. Newman, "Power-law distributions in empirical data" (SIAM Review 2009). As an alternative to the paper, you can read Cosma's blog post "So You Think You Have a Power Law — Well Isn't That Special?", 6/15/2007; or this summary of the results in "Cozy Catastrophes", 2/15/2012:

In our paper, we looked at 24 quantities which people claimed showed power law distributions. Of these, there were seven cases where we could flat-out reject a power law, without even having to consider an alternative, because the departures of the actual distribution from even the best-fitting power law were much too large to be explained away as fluctuations. (One of the wonderful things about a stochastic model is that it tells you how big its own errors should be.) In contrast, there was only one data set where we could rule out the log-normal distribution. […]

We found exactly one case where the statistical evidence for the power-law was "good", meaning that "the power law is a good fit and that none of the alternatives considered is plausible", which was Zipf's law of word frequency distributions. We were of course aware that when people claim there are power laws, they usually only mean that the tail follows a power law. This is why all these comparisons were about how well the different distributions fit the tail, excluding the body of the data. We even selected where "the tail" begins to maximize the fit to a power law for each case. Even so, there was just this one case where the data compellingly support a power-law tail.
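The core of the Clauset, Shalizi, and Newman recipe is a maximum-likelihood fit of the tail exponent, rather than the once-common least-squares line on a log-log plot. Here is a minimal sketch of their continuous-case estimator, checked on synthetic power-law data; their full method additionally selects x_min by Kolmogorov–Smirnov distance and runs likelihood-ratio tests against alternatives such as the log-normal, both omitted here:

```python
import math
import random

def powerlaw_mle(xs, x_min):
    """Continuous power-law MLE from Clauset, Shalizi & Newman (2009):
    alpha_hat = 1 + n / sum(ln(x_i / x_min)), over the tail x_i >= x_min."""
    tail = [x for x in xs if x >= x_min]
    n = len(tail)
    return 1.0 + n / sum(math.log(x / x_min) for x in tail)

def powerlaw_sample(alpha, x_min, n, rng):
    # Inverse-transform sampling: x = x_min * (1 - u) ** (-1 / (alpha - 1)).
    return [x_min * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0))
            for _ in range(n)]

rng = random.Random(0)
data = powerlaw_sample(2.5, 1.0, 50_000, rng)
alpha_hat = powerlaw_mle(data, 1.0)  # should land close to the true 2.5
```

On heavy-tailed data that is actually log-normal, this estimator still returns *some* alpha, which is exactly why the paper's goodness-of-fit and model-comparison steps are essential before claiming a power law.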

Links to Cosma's other posts on the topic can be found in "Power laws and other heavy-tailed distributions", and a recent discussion can be found in "Power laws", 5/30/2018.

It's worth noting that there are many random processes that can be shown mathematically to produce power-law distributions, at least in the tails — as Cosma puts it,

there turn out to be nine and sixty ways of constructing power laws, and every single one of them is right, in that it does indeed produce a power law. Power laws turn out to result from a kind of central limit theorem for multiplicative growth processes, […].

So in a way it's surprising that so many of the power-law claims turn out to be bogus or at least doubtful.
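One of those nine-and-sixty constructions is Herbert Simon's 1955 copying model: each new token is either a brand-new type or a repeat of a uniformly chosen earlier token, so reuse is proportional to a type's current frequency (preferential attachment). A minimal simulation, with illustrative parameters, produces the characteristic heavy tail:

```python
import random
from collections import Counter

def simon_process(n_tokens, new_prob, rng):
    """Simon's copying model: with probability new_prob emit a new type;
    otherwise repeat a uniformly chosen earlier token, so types are reused
    in proportion to their current frequency. The resulting type-frequency
    distribution develops a power-law tail."""
    tokens = [0]
    next_type = 1
    for _ in range(n_tokens - 1):
        if rng.random() < new_prob:
            tokens.append(next_type)
            next_type += 1
        else:
            tokens.append(rng.choice(tokens))  # copy, weighted by frequency
    return Counter(tokens)

rng = random.Random(1)
freqs = simon_process(20_000, 0.05, rng)
ranked = sorted(freqs.values(), reverse=True)
# The oldest types accumulate huge counts while most types stay rare.
```

The simulation makes the mechanism concrete: no optimization or design is involved, just multiplicative growth, which is why finding a power law by itself tells you so little about the underlying process.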

And what about the claims of power-law distributions in the vocalizations of whales, dolphins, and other animals? I'm not sure. But the fact that the key review paper doesn't list Clauset et al. among its hundreds of references, and that none of the relevant papers seems to apply the tests described in Clauset et al., or to offer links to their underlying data, makes me suspicious. (If any readers know of papers that apply the needed tests, or offer datasets suitable for checking the claims, please let me know.)

And as a footnote: The nature of processes that generate power-law distributions was the topic of what might be the most unpleasant debate in the history of mathematical modeling. This battle took place between 1955 and 1961, and the combatants were Herbert Simon and Benoit Mandelbrot. See "The long tail of religious studies" for links and details.

Update — see also Heathcote, Brown, and Mewhort, "The power law repealed: The case for an exponential law of practice", Psychonomic Bulletin & Review 2000:

The power function is treated as the law relating response time to practice trials. However, the evidence for a power law is flawed, because it is based on averaged data. We report a survey that assessed the form of the practice function for individual learners and learning conditions in paradigms that have shaped theories of skill acquisition. We fit power and exponential functions to 40 sets of data representing 7,910 learning series from 475 subjects in 24 experiments. The exponential function fit better than the power function in all the unaveraged data sets. Averaging produced a bias in favor of the power function. 

And also Michael Ramscar, "Source codes in human communication", preprint 3/22/2019.

 

June 2, 2019 @ 6:50 am · Filed under Computational linguistics




from Hacker News http://bit.ly/YV9WJO
via IFTTT

The first trailer for Netflix’s Dark Crystal: Age of Resistance is astonishing

http://bit.ly/2I9kP8j

The first trailer has arrived for The Dark Crystal: Age of Resistance, Netflix’s upcoming prequel to the 1982 Jim Henson / Frank Oz movie The Dark Crystal. It makes the 10-hour series look astonishingly faithful to the original film. The visuals have been updated for a digital age — now, instead of slogging through swamps and across fields full of strange puppet creatures, the characters are navigating immense, brightly colored imaginary spaces — but the film’s primary species, the diminutive elf-like Gelflings and the corrupted, bird-like Skeksis, look practically unchanged since 1982.

The series, based on J.M. Lee’s recent Dark Crystal companion novels Shadows of the Dark Crystal and Song of the Dark Crystal, appears to take place in the years immediately after an event described in the 1982 movie. The Skeksis have cracked the Dark Crystal, a mysterious power source at the heart of their castle, and they are gradually turning from guardians of the planet Thra into decadent monsters obsessed with their own longevity and power. The books chronicle the slow awakening of the Gelflings, who don’t realize the Skeksis are targeting them until their situation is dire.

Taron Egerton, Anya Taylor-Joy, and Nathalie Emmanuel voice the series’s primary characters. A Netflix statement reveals that other Gelflings are voiced by film stars, including Helena Bonham Carter, Natalie Dormer, Eddie Izzard, Theo James, Toby Jones, Shazad Latif, Gugu Mbatha-Raw, Mark Strong, and Alicia Vikander. Among the actors voicing the Skeksis are Harvey Fierstein, Mark Hamill, Ralph Ineson, Jason Isaacs, Keegan-Michael Key, Ólafur Darri Ólafsson, Simon Pegg, and Andy Samberg. The “keeper of secrets” Aughra, originally voiced by Frank Oz, will be played by Donna Kimball.

What’s most compelling about this trailer, though, is the stunning fidelity to the original film, which suggests the project’s authors — including film director Louis Leterrier, executive producers Lisa Henson (Jim Henson’s daughter) and Halle Stanford, and creators Jeffrey Addiss and Will Matthews — have a deep respect for the original source material and aren’t just out for a quick nostalgia cash-grab.

The series premieres on Netflix on August 30th, 2019.



from The Verge http://bit.ly/1jLudMg
via IFTTT