Quantum supremacy: the gloves are off

https://ift.tt/2obLjjD

Links:
Google paper in Nature
New York Times article
IBM paper and blog post responding to Google’s announcement

When Google’s quantum supremacy paper leaked a month ago—not through Google’s error, but through NASA’s—I had a hard time figuring out how to cover the news here. I had to say something; on the other hand, I wanted to avoid any detailed technical analysis of the leaked paper, because I was acutely aware that my colleagues at Google were still barred by Nature‘s embargo rules from publicly responding to anything I or others said. (I was also one of the reviewers for the Nature paper, which put additional obligations on me.)

I ended up with Scott’s Supreme Quantum Supremacy FAQ, which tried to toe this impossible line by “answering general questions about quantum supremacy, and the consequences of its still-hypothetical achievement, in light of the leak.” It wasn’t an ideal solution—for one thing, because while I still regard Google’s sampling experiment as a historic milestone for our whole field, there are some technical issues, aspects that subsequent experiments (hopefully coming soon) will need to improve. Alas, the ground rules of my FAQ forced me to avoid such issues, which caused some readers to conclude mistakenly that I didn’t think there were any.

Now, though, the Google paper has come out as Nature‘s cover story, at the same time as there have been new technical developments—most obviously, the paper from IBM (see also their blog post) saying that they could simulate the Google experiment in 2.5 days, rather than the 10,000 years that Google had estimated.

(Yesterday I was deluged by emails asking me “whether I’d seen” IBM’s paper. As a science blogger, I try to respond to stuff pretty quickly when necessary, but I don’t—can’t—respond in Twitter time.)

So now the gloves are off. No more embargo. Time to address the technical stuff under the hood—which is the purpose of this post.

I’m going to assume, from this point on, that you already understand the basics of sampling-based quantum supremacy experiments, and that I don’t need to correct beginner-level misconceptions about what the term “quantum supremacy” does and doesn’t mean (no, it doesn’t mean scalability, fault-tolerance, useful applications, breaking public-key crypto, etc. etc.). If this is not the case, you could start (e.g.) with my FAQ, or with John Preskill’s excellent Quanta commentary.

Without further ado:

(1) So what about that IBM thing? Are random quantum circuits easy to simulate classically?

OK, so let’s carefully spell out what the IBM paper says. They argue that, by commandeering the full attention of Summit at Oak Ridge National Lab, the most powerful supercomputer that currently exists on Earth—one that fills the area of two basketball courts, and that (crucially) has 250 petabytes of hard disk space—one could just barely store the entire quantum state vector of Google’s 53-qubit Sycamore chip in hard disk.  And once one had done that, one could simulate the chip in ~2.5 days, more-or-less just by updating the entire state vector by brute force, rather than the 10,000 years that Google had estimated on the basis of my and Lijie Chen’s “Schrödinger-Feynman algorithm” (which can get by with less memory).

The IBM group understandably hasn’t actually done this yet—even though IBM built the machine, the world’s #1 supercomputer isn’t just sitting around waiting for jobs! But I see little reason to doubt that their analysis is basically right. I don’t know why the Google team didn’t consider how such near-astronomical hard disk space would change their calculations; presumably they wish they had.

I find this to be much, much better than IBM’s initial reaction to the Google leak, which was simply to dismiss the importance of quantum supremacy as a milestone. Designing better classical simulations is precisely how IBM and others should respond to Google’s announcement, and how I said a month ago that I hoped they would respond. If we set aside the pass-the-popcorn PR war (or even if we don’t), this is how science progresses.

But does IBM’s analysis mean that “quantum supremacy” hasn’t been achieved? No, it doesn’t—at least, not under any definition of “quantum supremacy” that I’ve ever used. Recall that the Sycamore chip took about 3 minutes to generate enough samples to pass the “linear cross-entropy benchmark,” the statistical test that Google applies to the outputs of its device. Three minutes versus 2.5 days is still a quantum speedup by a factor of 1200. More relevant, I think, is to compare the number of “elementary operations.” Let’s generously count a FLOP (floating-point operation) as the equivalent of a quantum gate. Then by my estimate, we’re comparing ~5×10^9 quantum gates against ~2×10^20 FLOPs—a quantum speedup by a factor of ~40 billion.
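
(If you want to check that arithmetic yourself, here it is spelled out as a couple of Common Lisp forms, with the expected values in the comments; the inputs are just the round numbers quoted above.)

(/ (* 2.5 24 60) 3)    ; 2.5 days over 3 minutes => 1200.0
(/ 2e20 5e9)           ; ~2e20 FLOPs over ~5e9 gates => ~4e10, i.e. ~40 billion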

Even that, though, is arguably not the right comparison. As far as we know today, the Summit supercomputer would need the full 2.5 days, or 2×10^20 FLOPs, even to generate one sample from the output distribution of an ideal, randomly-generated 53-qubit quantum circuit. Google’s Sycamore chip needs about 40 microseconds to generate each sample. Now admittedly, the samples are extremely noisy: indeed, they’re drawn from a probability distribution that looks like 0.998U + 0.002D, where U is the uniform distribution over 53-bit strings, and D is the hard distribution we care about. But notice that if we took 500 = 1/0.002 samples from this noisy distribution, then with high probability at least one of them would’ve been drawn from D. Sycamore takes 0.02 seconds to generate 500 samples—which, compared to 2.5 days, would imply a quantum speedup by a factor of ~11 million, or a factor of ~4×10^14 in “number of elementary operations.”
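
(The back-of-the-envelope version of this comparison, again as Common Lisp forms with expected values in the comments. The ~1,000 gates per circuit run is a rough figure implied by the totals above, not a number taken from the paper.)

(/ 1 0.002)                  ; samples needed to likely hit D at least once => ~500
(* 500 40e-6)                ; 500 samples at ~40 microseconds each => ~0.02 seconds
(/ (* 2.5 24 60 60) 0.02)    ; 2.5 days over 0.02 seconds => ~1.1e7, i.e. ~11 million
(/ 2e20 (* 500 1000))        ; ~2e20 FLOPs over ~500,000 quantum gates => ~4e14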

For me, though, the broader point is that neither party here—certainly not IBM—denies that the top-supercomputers-on-the-planet-level difficulty of classically simulating Google’s 53-qubit programmable chip really is coming from the exponential character of the quantum states in that chip, and nothing else. That’s what makes this back-and-forth fundamentally different from the previous one between D-Wave and the people who sought to simulate its devices classically. The skeptics, like me, didn’t much care what speedup over classical benchmarks there was or wasn’t today: we cared about the increase in the speedup as D-Wave upgraded its hardware, and the trouble was we never saw a convincing case that there would be one. I’m a theoretical computer scientist, and this is what I believe: that after the constant factors have come and gone, what remains are asymptotic growth rates.

In the present case, while increasing the circuit depth won’t evade IBM’s “store everything to hard disk” strategy, increasing the number of qubits will. If Google, or someone else, upgraded from 53 to 55 qubits, that would apparently already be enough to exceed Summit’s 250-petabyte storage capacity. At 60 qubits, you’d need 33 Summits. At 70 qubits, enough Summits to fill a city … you get the idea.
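
(To see why, just count bytes. Here is a small sketch, assuming 8 bytes per complex amplitude; the exact constant depends on the numerical precision used, so treat these as rough figures, and the function name is mine.)

;; Storage needed for the full state vector of an n-qubit circuit,
;; assuming 8 bytes per amplitude:
(defun state-vector-petabytes (n)
  (/ (* (expt 2 n) 8) 1d15))

(state-vector-petabytes 53)   ; => ~72 PB   (fits within Summit's 250 PB)
(state-vector-petabytes 55)   ; => ~288 PB  (already too much)
(state-vector-petabytes 60)   ; => ~9,223 PB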

From the beginning, it was clear that quantum supremacy would not be a milestone like the moon landing—something that’s achieved in a moment, and is then clear to everyone for all time. It would be more like eradicating measles: it could be achieved, then temporarily unachieved, then re-achieved. For by definition, quantum supremacy is all about beating something—namely, classical computation—and the latter can, at least for a while, fight back.

As Boaz Barak put it to me, the current contest between IBM and Google is analogous to Kasparov versus Deep Blue, except with the world-historic irony that IBM is playing the role of Kasparov! In other words, Kasparov can put up a heroic struggle, during a “transitional period” that lasts a year or two, but the fundamentals of the situation are that he’s toast. If Kasparov had narrowly beaten Deep Blue in 1997, rather than narrowly losing, the whole public narrative would likely have been different (“humanity triumphs over computers after all!”). Yet as Kasparov himself well knew, the very fact that the contest was close meant that, either way, human dominance was ending.

Let me leave the last word on this to friend-of-the-blog Greg Kuperberg, who graciously gave me permission to quote his comments about the IBM paper.

I’m not entirely sure how embarrassed Google should feel that they overlooked this. I’m sure that they would have been happier to anticipate it, and happier still if they had put more qubits on their chip to defeat it. However, it doesn’t change their real achievement.

I respect the IBM paper, even if the press along with it seems more grouchy than necessary. I tend to believe them that the Google team did not explore all avenues when they said that their 53 qubits aren’t classically simulable. But if this is the best rebuttal, then you should still consider how much Google and IBM still agree on this as a proof-of-concept of QC. This is still quantum David vs classical Goliath, in the extreme. 53 qubits is in some ways still just 53 bits, only enhanced with quantum randomness. To answer those 53 qubits, IBM would still need entire days of computer time with the world’s fastest supercomputer, a 200-petaflop machine with hundreds of thousands of processing cores and trillions of high-speed transistors. If we can confirm that the Google chip actually meets spec, but [we need] this much computer power to do it, then to me that’s about as convincing as a larger quantum supremacy demonstration that humanity can no longer confirm at all.

Honestly, I’m happy to give both Google and IBM credit for helping the field of QC, even if it is the result of a strange dispute.

I should mention that, even before IBM’s announcement, Johnnie Gray, a postdoc at Imperial College, gave a talk (abstract here) at Caltech’s Institute for Quantum Information with a proposal for a different, faster way to classically simulate quantum circuits like Google’s—in this case, by doing tensor network contraction more cleverly. Unlike both IBM’s proposed brute-force simulation and the Schrödinger-Feynman algorithm that Google implemented, Gray’s algorithm (as far as we know now) would need to be repeated k times if you wanted k independent samples from the hard distribution. Partly because of this issue, Gray’s approach doesn’t currently look competitive for simulating thousands or millions of samples, but we’ll need to watch it and see what happens.

(2) Direct versus indirect verification.

The discussion of IBM’s proposed simulation brings us to a curious aspect of the Google paper—one that was already apparent when Nature sent me the paper for review back in August. Namely, Google took its supremacy experiments well past the point where even they themselves knew how to verify the results, by any classical computation that they knew how to perform feasibly (say, in less than 10,000 years).

So you might reasonably ask: if they couldn’t even verify the results, then how did they get to claim quantum speedups from those experiments? Well, they resorted to various gambits, which basically involved estimating the fidelity on quantum circuits that looked almost the same as the hard circuits, but happened to be easier to simulate classically, and then making the (totally plausible) assumption that that fidelity would be maintained on the hard circuits. Interestingly, they also cached their outputs and put them online (as part of the supplementary material to their Nature paper), in case it became feasible to verify them in the future.

Maybe you can now see where this is going. From Google’s perspective, IBM’s rainstorm comes with a big silver lining. Namely, by using Summit, hopefully it will now be possible to verify Google’s hardest (53-qubit and depth-20) sampling computations directly! This should provide an excellent test, since not even the Google group themselves would’ve known how to cheat and bias the results had they wanted to.

This whole episode has demonstrated the importance, when doing a sampling-based quantum supremacy experiment, of going deep into the regime where you can no longer classically verify the outputs, as weird as that sounds. Namely, you need to leave yourself a margin, in the likely event that the classical algorithms improve!

Having said that, I don’t mind revealing at this point that the lack of direct verification of the outputs, for the largest reported speedups, was my single biggest complaint when I reviewed Google’s Nature submission. It was because of my review that they added a paragraph explicitly pointing out that they did do direct verification, using something like a million cores running for something like a month, for a smaller quantum speedup (merely a million times faster than a Schrödinger-Feynman simulation running on a million cores, rather than two billion times faster).

(3) The asymptotic hardness of spoofing Google’s benchmark.

OK, but if Google thought that spoofing its test would take 10,000 years, using the best known classical algorithms running on the world’s top supercomputers, and it turns out instead that it could probably be done in more like 2.5 days, then how much else could’ve been missed? Will we find out next that Google’s benchmark can be classically spoofed in mere milliseconds?

Well, no one can rule that out, but we do have some reasons to think that it’s unlikely—and crucially, that even if it turned out to be true, one would just have to add 10 or 20 or 30 more qubits to make it no longer true. (We can’t be more definitive than that? Aye, such are the perils of life at a technological inflection point—and of computational complexity itself.)

The key point to understand here is that we really are talking about simulating a random quantum circuit, with no particular structure whatsoever. While such problems might have a theoretically efficient classical algorithm—i.e., one that runs in time polynomial in the number of qubits—I’d personally be much less surprised if you told me there was a polynomial-time classical algorithm for factoring. In the universe where amplitudes of random quantum circuits turn out to be efficiently computable—well, you might as well just tell me that P=PSPACE and be done with it.

Crucially, if you look at IBM’s approach to simulating quantum circuits classically, and Johnnie Gray’s approach, and Google’s approach, they could all be described as different flavors of “brute force.” That is, they all use extremely clever tricks to parallelize, shave off constant factors, make the best use of available memory, etc., but none involves any deep new mathematical insight that could roust BPP and BQP and the other complexity gods from their heavenly slumber. More concretely, none of these approaches seem to have any hope of “breaching the 2^n barrier,” where n is the number of qubits in the quantum circuit to be simulated (assuming that the circuit depth is reasonably large). Mostly, they’re just trying to get down to that barrier.

Ah, but at the end of the day, we only believe that Google’s Sycamore chip is solving a classically hard problem because of the statistical test that Google applies to its outputs: the so-called “Linear Cross-Entropy Benchmark,” which I described in Q3 of my FAQ. And even if we grant that calculating the output probabilities for a random quantum circuit is almost certainly classically hard, and sampling the output distribution of a random quantum circuit is almost certainly classically hard—still, couldn’t spoofing Google’s benchmark be classically easy?

This last question is where complexity theory can contribute something to the story. A couple weeks ago, UT undergraduate Sam Gunn and I adapted the hardness analysis from my and Lijie Chen’s 2017 paper “Complexity-Theoretic Foundations of Quantum Supremacy Experiments,” to talk directly about the classical hardness of spoofing the Linear Cross-Entropy benchmark. Our short paper about this should be on the arXiv later this week. Briefly, though, Sam and I show that if you had a sub-2^n classical algorithm to spoof the Linear Cross-Entropy benchmark, then you’d also have a sub-2^n classical algorithm that, given as input a random quantum circuit, could estimate a specific output probability (for example, that of the all-0 string) with variance at least slightly (say, Ω(2^(-3n))) better than that of the trivial estimator that just always guesses 2^(-n). Or in other words: spoofing Google’s benchmark is no easier than the general problem of nontrivially estimating amplitudes in random quantum circuits. Our result helps to explain why, indeed, neither IBM nor Johnnie Gray nor anyone else has suggested any attack that’s specific to Google’s Linear Cross-Entropy benchmark: they all simply attack the general problem of calculating the final amplitudes.

(4) Why use Linear Cross-Entropy at all?

In the comments of my FAQ, some people wondered why Google chose the Linear Cross-Entropy benchmark specifically—especially since they’d used a different benchmark (multiplicative cross-entropy, which unlike the linear version actually is a cross-entropy) in their earlier papers. I asked John Martinis this question, and his answer was simply that linear cross-entropy had the lowest variance of any estimator they tried. Since I also like linear cross-entropy—it turns out, for example, to be convenient for the analysis of my certified randomness protocol—I’m 100% happy with their choice. Having said that, there are many other choices of benchmark that would’ve also worked fine, and with roughly the same level of theoretical justification.
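
(For concreteness, here is the statistic itself, as a small Common Lisp sketch of the definition rather than anyone’s production code: take the ideal probabilities of the bitstrings the device actually output, average them, multiply by 2^n, and subtract 1.)

;; Linear cross-entropy score for an n-qubit circuit, given the ideal
;; probabilities of the observed samples:
(defun linear-xeb (n ideal-probs)
  (1- (* (expt 2 n)
         (/ (reduce #'+ ideal-probs)
            (length ideal-probs)))))

;; Uniform random guessing scores ~0, sampling from the ideal distribution
;; scores ~1, and Google's noisy chip scores ~0.002.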

(5) Controlled-Z versus iSWAP gates.

Another interesting detail from the Google paper is that, in their previous hardware, they could implement a particular 2-qubit gate called the Controlled-Z. For their quantum supremacy demonstration, on the other hand, they modified their hardware to implement a different 2-qubit gate called the iSWAP. Now, the iSWAP has no known advantages over the Controlled-Z, for any applications like quantum simulation or Shor’s algorithm or Grover search. Why then did Google make the switch? Simply because, with certain classical simulation methods that they’d been considering, the simulation’s running time grows like 4 to the power of the number of iSWAP gates, but only like 2 to the power of the number of Controlled-Z gates! In other words, they made this engineering choice purely and entirely to make a classical simulation of their device sweat more. This seems totally fine and entirely within the rules to me. (Alas, the iSWAP versus Controlled-Z issue has no effect on a proposed simulation method like IBM’s.)

(6) Gil Kalai’s objections.

Over the past month, Shtetl-Optimized regular and quantum computing skeptic Gil Kalai has been posting one objection to the Google experiment after another on his blog. Unlike the IBM group and many of Google’s other critics, Gil completely accepts the centrality of quantum supremacy as a goal. Indeed, he’s firmly predicted for years that quantum supremacy could never be achieved for fundamental reasons—and he agrees that the Google result, if upheld, would refute his worldview. Gil also has no dispute with the exponential classical hardness of the problem that Google is solving.

Instead, Gil—if we’re talking not about “steelmanning” his views, but about what he himself actually said—has taken the position that the Google experiment must’ve been done wrong and will need to be retracted. He’s offered varying grounds for this. First he said that Google never computed the full histogram of probabilities with a smaller number of qubits (for which such an experiment is feasible), which would be an important sanity check. Except, it turns out they did do that, and it’s in their 2018 Science paper. Next he said that the experiment is invalid because the qubits have to be calibrated in a way that depends on the specific circuit to be applied. Except, this too turns out to be false: John Martinis explicitly confirmed for me that once the qubits are calibrated, you can run any circuit on them that you want. In summary, unlike the objections of the IBM group, so far I’ve found Gil’s objections to be utterly devoid of scientific interest or merit.

Update #1: Alas, I’ll have limited availability today for answering comments, since we’ll be grading the midterm exam for my Intro to Quantum Information Science course! I’ll try to handle the backlog tomorrow (Thursday).

Update #2: Aaannd … timed to coincide with the Google paper, last night the group of Jianwei Pan and Chaoyang Lu put up a preprint on the arXiv reporting a BosonSampling experiment with 20 photons (the previous record had been 6 photons). At this stage of the quantum supremacy race, many had of course written off BosonSampling—or said that its importance was mostly historical, in that it inspired Google’s random circuit sampling effort.  I’m thrilled to see BosonSampling itself take such a big leap; hopefully, this will eventually lead to a demonstration that BosonSampling was (is) a viable pathway to quantum supremacy as well.  And right now, with fault-tolerance still having been demonstrated in zero platforms, we need all the viable pathways we can get.  What an exciting day for the field.





Stanford institute calls for $120 billion investment in U.S. AI ecosystem

https://ift.tt/360S6xJ


The Stanford University Institute for Human-Centered Artificial Intelligence is calling for the U.S. government to make a $120 billion investment in the nation’s AI ecosystem over the course of the next 10 years. The report describes efforts by the Trump administration, such as a call for nearly $1 billion in U.S. non-defense research and development spending in 2020, as “encouraging, but not nearly enough.”

The national AI vision report specifically calls for $2 billion in annual spending to support entrepreneurs and expand innovation, $3 billion on education, and $7 billion on interdisciplinary research to discover breakthroughs in the latest AI.

The report, written by center directors John Etchemendy and Dr. Fei-Fei Li, says that underfunding of AI efforts threatens U.S. global leadership and is a “national emergency in the making.”

Li is a leader at the Stanford Computer Vision Lab, creator of ImageNet, and until last year, Li served as chief AI scientist for Google Cloud.

Failure to take decisive action could also upend the global economy, the report reads.

“If guided properly, the Age of AI could usher in an era of productivity and prosperity for all. PWC estimates AI will deliver $15.7 trillion to the global economy by 2030. However, if we don’t harness it responsibly and share the gains equitably, it will lead to greater concentrations of wealth and power for the elite few who usher in this new age — and poverty, powerlessness and a lost sense of purpose for the global majority,” the report reads. “The potential financial advantages of AI are so great, and the chasm between AI haves and have-nots so deep, that the global economic balance as we know it could be rocked by a series of catastrophic tectonic shifts.”

Other calls for growth in AI research and development funding include the Computing Community Consortium’s 20-year R&D roadmap, which proposed a range of funding priorities such as research into personalized education, lifetime AI assistants, and the establishment of national research centers.

In May, legislation proposed in the U.S. Senate also called for the creation of a national AI center and for $2.2 billion in annual funding over the course of the next 5 years.




Why 'Free College' Is a Terrible Idea

https://ift.tt/32FeAC5

Michael Gamez, 22, has wanted to work on cars since he was a kid, just like his father and grandfather. He fixed up and sold his first used car when he was 14. "It felt really good to build something up and sell it for a profit," says Gamez.

But his teachers conditioned him to equate a college degree with success. So he enrolled at the University of California, Irvine, with a plan to major in mechanical engineering. During his sophomore year, Gamez dropped out because he realized that he was on the wrong path.

Sen. Bernie Sanders (I–Vt.) and Sen. Elizabeth Warren (D–Mass.) have promised that, if elected, they'll make public college tuition-free and wipe out federal student loan debt, which in the U.S. tops $1.5 trillion.

"If you make college free, then there's going to be so many [degrees] floating around that if you want to get a better job, then you're going to need to go and get some supplemental degree," says Bryan Caplan, an economist at George Mason University and author of The Case Against Education. He's skeptical that professors like him have much to offer most students.

"We're spending too much time and money on education because most of what you learn in school you will never use after the final exam," says Caplan. "If you just calmly compare what we're studying to what we really do, the connection is shockingly weak."

Caplan says that most people attend college as a way to signal to prospective employers that they're reasonably intelligent, conscientious, and conformist.

"The signaling story is mostly that our society says that you're supposed to graduate, and if you're supposed to graduate, the failure to graduate signals non-conformity," says Caplan. "People that are willing to just bite their tongues and suffer through it are the ones who are also going to be good at doing that once they get a job."

Caplan's case rests partly on the so-called sheepskin effect, named for the material on which diplomas were once printed.

Studies of the earnings of college graduates reveal that the salary increase for completing the last year of college is on average more than double that of completing the first three, implying that it's the fortitude to obtain the degree—not the knowledge gained—that explains the boost in compensation.

"The usual view, called the human capital view, says that basically all of what's going on in schools, is that they are pouring useful skills into you," says Caplan. "What I'm saying is the main payoff you're getting from school is that you're getting certified, you're getting stamped. You are, in other words, getting what you need to convince employers that you are a good bet."

Instead of college, Leah Wilczewski, 21, enrolled in Praxis, a one-year program focusing on communication, marketing, and other job skills. It cost $12,000 but included a 6-month paid apprenticeship worth $16,000, meaning she'll finish the program $4,000 in the black.

Wilczewski is in the middle of her apprenticeship at Impossible Foods, the Bay Area company that sells a meatless hamburger.

"I feel as if being in Praxis and being able to nail a job that typically requires four years of school, if not more…it's like, okay, with that knowledge, what else can I do?" says Wilczewski.

After Michael Gamez dropped out of UC Irvine, he enrolled in an auto mechanic trade school while also working at Pep Boys. Then he applied for and received a $12,400 scholarship from Mike Rowe Works, which helps young people looking to enter the skilled trades.

From there, he entered a three-month training program with BMW and a day after finishing began his job as a high-level technician at BMW of Beverly Hills.

"Now that I work with cars…I get excited to go to work," says Gamez. "I feel like a lot of people, they get surprised when I tell them the amount of money that a mechanic or a technician can make at a dealership."

Even though it's possible to acquire the necessary skills to make a good living without attending college, enrollment at 4-year universities has stayed steady for the past 10 years, and an Economics of Education Review study by Nicholas Turner found that every dollar of federal aid spending crowds out about 83 cents of institutional aid. Such trends leave Caplan skeptical that enrollment will fall anytime soon despite the increasing availability of online alternatives.

"If you've got that kind of guaranteed customer base where the taxpayers have no choice in whether or not the money's going to be spent and the government hands it over to you, then you're going to be fine," says Caplan.

As for Wilczewski, she has two more months left in her apprenticeship and is hopeful that Impossible Foods will keep her on in the sales department. Gamez hopes that working for BMW is a first step towards eventually owning his own shop.

Produced by Zach Weissmueller. Camera by John Osterhoudt, Alexis Garcia, Jim Epstein, Todd Krainin, and Weissmueller.

Photo credits: Anthony Behar/Sipa USA/Newscom, Bastiaan Slabbers/ZUMA Press/Newscom, Jack Kurtz/ZUMA Press/Newscom.





Are Electric Cars Good for the Environment?

https://ift.tt/2pRfJYH

This article is Part 5 of an 11-part series analysis of Tesla, Elon Musk and EV Revolution. You can read other parts here.

My wife loves driving the Model 3, not for all the selfish reasons I like to drive it (it is fast and quite the iPad on wheels) but because she feels she helps the environment. Is she right?

Unlike an ICE car, which takes fuel stored in the gas tank, combusts it in the engine, and thus creates kinetic energy, Tesla takes electricity stored in the battery pack and converts it directly into kinetic energy. That’s a very clean and quiet process. However, the electricity that magically appears in our electrical outlets is not a gift from Thor, the thunder god; it was generated somewhere and transmitted to us.

As I write this, I am slightly disturbed by how the topic I am about to discuss has been politicized. I am not going to debate global warming here, but let’s at least agree that an excess of carbon dioxide (CO2) and carbon monoxide (CO) is bad for you and me, and for the environment. If you disagree with me, start an ICE car in your garage, roll down the windows, and sit there for about 20 minutes. Actually, please don’t, because you’ll die. So let’s agree that a billion cars emitting CO and CO2 is not good and that if we emit less CO and CO2 it is good for air quality.

Roughly two-thirds of the electricity generated in the U.S. is sourced from fossil fuels. The good news is that only half of that comes from coal; the other half comes from natural gas, which produces half as much CO2 as coal (though it has its own side effects – it leaks methane). Another 20% of U.S. energy comes from nuclear, which produces zero carbon emissions. The remaining 17% comes from “green” sources, such as hydro (7%), wind (6.6%), and solar (1.7%).

So if tomorrow everyone in the U.S. switched to an EV, and our electrical grid was able to handle it, we’d instantly cut our nation’s CO2 emissions by more than a third – that’s a good thing.

Aside from all the reasons above (including going from 0 to 60 mph in 4.4 seconds), the reason I am a big fan of electric is that it gives us choices. Gasoline cars run either on oil or on oil. Electric cars open the door for alternative energy sources. That flexibility means oil might stop being the commodity that dictates our geopolitics, and that could mean we’ll have fewer wars.

There are alternatives to fossil fuels that have much less impact on the environment – and on you and me. There is nuclear energy, for one. To me, this is a no-brainer. Nuclear power plants have little environmental footprint – they spit out steam. But we have had Chernobyl, Three Mile Island, and Fukushima. It is unclear how many people died in those disasters, because the death toll estimates range from a few dozen who died from direct exposure to thousands who died from cancer caused by radiation.

If cooler minds had prevailed, we would not be on the fourth generation of nuclear reactors but the four hundredth. Nuclear should be our core energy source: It is cheap; it produces very little CO2; it provides stable, predictable output; and the latest versions are safe (they cool themselves down if there is loss of power). However, what I have learned in investing is that what I think should happen doesn’t matter; only what will happen matters. Here is the good news: Nuclear production is expected to increase by almost 50% over the next 20 years, and 90% of the increase will come from the two most populous nations – China and India.

After the Fukushima disaster, Germany decided it wanted to quit nuclear by 2022 (ironically, Japan is going back to nuclear), and its green (wind, solar, hydro, biomass) energy generation went from less than 20% in 2011 to 44% in 2018. Its electricity price to households went up 24% over that period. Germans pay almost three times as much for electricity as Americans, while their CO2 emissions have not budged.

This happened for two reasons. First, wind turbines and solar replaced nuclear, which produced little CO2; second, Mother Nature is not predictable, even in Germany. On a cloudy day, solar output drops, and if the wind doesn’t blow, wind turbines don’t turn, and “peak” coal- or gas-fed power plants have to come online to fill in the gap. Peak plants are smaller, usually less efficient power plants that produce more CO2 per kilowatt hour than larger plants.

Despite the overall jump in Germany’s electricity costs, electricity prices in Germany turned negative (yes, negative) on more than 100 occasions in 2017; customers were paid to use electricity. It was breezy and sunny, so renewables produced a lot of energy, but there was no demand for it, so it was cheaper to pay consumers to use power than to disconnect wind turbines and solar panels from the grid.

Ten or 15 years ago, the thinking was that the solution to our energy problems would be the price of solar dropping. It has, falling by 50% to 80% over the past decade, and this trend will likely continue. The problem is that when we need heat or AC in our houses, we expect it to be available instantly, even on cloudy days. Instead of peak plants, we need peak batteries; on very sunny days, we’d store extra electricity in battery packs.

Interestingly, Tesla provided a 100-megawatt battery system to the state of South Australia for $66 million to help it deal with a massive blackout in February 2017.  It is unclear if Tesla actually made money on this transaction. However, it was reported that the battery saved the state $17 million in six months by allowing it to sell extra power to the grid.

Recently, Tesla announced the Megapack (a battery pack that looks like a shipping container), a large, utility-grade solution that has 60% greater power density than the battery used in South Australia. Thus, the billions poured into battery technology for EVs are having the unintended consequence of lowering the cost of peak batteries.

There is another inconvenient truth. Batteries and renewables in general are made of earthly materials that have to be mined, transported, refined, and turned into finished products, which are often “browner” (or whatever color is the opposite of green) than nongreen alternatives. The Tesla battery weighs about a thousand pounds and requires 500,000 pounds of raw material to be unearthed to build it.

And there is yet another issue with lithium ion batteries. They require cobalt, a mineral that is found in the Earth’s crust. But 50% to 70% (depending on the source) of cobalt reserves are in the Democratic Republic of the Congo (DRC), a politically unstable country that doesn’t shy from inhuman labor practices and child labor. Tesla and Panasonic have been reducing the amount of cobalt used in their batteries; it has declined by 60% – the Model 3 battery contains only 2.8% cobalt (the Volkswagen ID.3’s battery contains 12% to 14% cobalt). Tesla and Panasonic recently announced that they are working on a cobalt-free battery; they’ll substitute silicon for cobalt.

I have to confess that after researching this topic I now understand why Jeff Bezos believes we should mine asteroids for minerals and is committed to spending all his Amazon wealth on Blue Origin, his aerospace venture. And Elon Musk’s SpaceX seeks to relocate earthlings to Mars. No matter what path we take in generating electricity, we will still produce plenty of carbon dioxide in the process. However, electric cars provide options (some better than others), whereas gasoline cars force us to choose between oil and oil.

This is just one out of 11 parts of my analysis of Tesla, Elon Musk, and the EV revolution.

You can get the complete analysis as an email series, PDF, EPUB, or Kindle ebook here, or by email at tesla-article@contrarianedge.com





The Current War is basically Amadeus for electricity

https://ift.tt/2W5w8oH

Photo: Dean Rogers / 101 Studios

It’s unusual for a film to arrive in theaters labeled “Director’s Cut,” but it’s happening with The Current War, a historical film about the tech face-off between irascible inventor Thomas Edison (Benedict Cumberbatch) and genteel industrialist George Westinghouse (Michael Shannon). The film’s themes are integrity, the damaging effects of powerful men, and the importance of savvy branding — and those same themes have played out in the film’s release just as much as they’ve turned up on-screen.

The “Director’s Cut” label ensures that even the most casual filmgoers will have a sense that something strange is at play with The Current War. The drama about the early days of electricity is hitting theaters more than two years after it premiered to tepid reviews at the Toronto International Film Festival. The version that played there had reportedly been heavily reworked by producer Harvey Weinstein. After Weinstein’s sexual abuse disgrace and The Weinstein Company’s collapse left the project in limbo, director Alfonso Gomez-Rejon (Me and Earl and the Dying Girl) got the unprecedented chance to bring the film back to his original vision. His faster-paced “Director’s Cut” is 10 minutes shorter, includes five new scenes, and features a brand-new score, not to mention a ready-made narrative about a filmmaker overcoming the odds.

In its new form, The Current War is neither a game-changer nor a dud. It’s a solid period piece that strives to be something more, and occasionally achieves it. The Current War is a bit of a dad movie, one that will likely pair nicely with the upcoming Ford v Ferrari. But it’s helped along by fascinating source material and a few moments of poignancy that land with welcome heft. Particularly for history buffs or tech enthusiasts, The Current War is worth sitting through the bad stuff to get to the good.

Set between 1880 and 1893, The Current War centers on the rivalry between Edison — at the film’s opening, a major celebrity for developing light bulbs — and Westinghouse, a mogul with his own ideas about how to bring artificial light to America. The battle focuses on which electrical system the country will adopt — Edison’s direct current (DC), or Westinghouse’s alternating current (AC). As the two men race to win the bid to light up the Chicago World’s Fair, The Current War also intermittently checks in on soft-spoken Serbian immigrant Nikola Tesla (Nicholas Hoult), a futurist who’s too busy dreaming up groundbreaking designs to turn his mental acumen into tangible business success.

In spite of the 19th-century setting, Gomez-Rejon clearly doesn’t want The Current War to play as a standard period piece. The film barrels ahead at an absolutely frenetic pace, occasionally calling to mind that much-mocked Bohemian Rhapsody scene where the camera can’t settle on any given actor for more than a second. Working with Park Chan-wook’s go-to cinematographer Chung-hoon Chung, Gomez-Rejon is desperate to ensure The Current War never bores the audience. Not since the original Thor has a film featured so many unmotivated Dutch angles. And then there are the dramatic circular pans, intense close-ups, and fisheye-lens shots, all deployed seemingly at random. The loopy aesthetic occasionally calls to mind Yorgos Lanthimos’ The Favourite, another recent period piece with modern flair. But The Current War lacks that film’s purposefulness and cohesion.


Still, while Gomez-Rejon’s visual aesthetic is a bit of a gimmick, it’s sometimes successful. The Current War is attention-grabbing, even if viewers are just paying attention because they’re trying to puzzle out the bizarre visual choices. Working from a script by playwright Michael Mitnick, Gomez-Rejon effectively turns a complex 13-year rivalry into a streamlined narrative. On-screen text hovers over major players to introduce them and clarify their relationships. Red and yellow lightbulbs marking out territories on a giant US map provide an easy way to track the footholds Edison and Westinghouse are gaining across the country. The film leaps ahead years at a time without losing track of its central story. While The Current War sometimes feels like a Wikipedia page brought to life, it’s at least an appreciably coherent one.

As Gomez-Rejon sees it, the war between Edison and Westinghouse isn’t just a personal or professional rivalry, it’s a battle for the future of the nation. The more cost-effective AC system would eventually allow the entire country to access electricity. The DC system would leave it as a privilege for the rich — which Edison doesn’t mind, so long as he gets the glory of winning the race.


As Edison uses animal-killing stunts to manipulate the public into thinking AC is unsafe, Tesla privately makes a timely socialist argument for the importance of treating technological advancements as public goods. In another relevant connection to the present, Gomez-Rejon emphasizes that groundbreaking technology often brings problematic side effects along with improvements. He juxtaposes the uplifting World’s Fair storyline with a fascinating tangent about the creation of the electric chair, which played its own dark, dramatic role in the Edison / Westinghouse rivalry.

Unfortunately, The Current War too often fails to bring a human heart to its intellectually stimulating ideas. The film doesn’t translate its palpable reverence for Tesla into a dramatically compelling through line. Hoult can’t escape the shadow of David Bowie’s memorable Tesla portrayal in The Prestige, either, although one glorious scene does let Hoult emerge as the comedic character actor he was born to be.

Cumberbatch, meanwhile, is hampered by both his terrible American accent and the way his brilliant but bullish Edison feels like a carbon copy of so many of the tortured-genius roles he’s played before. The darker side of Edison’s personality never entirely meshes with the sentimental attempts to humanize him, primarily his underdeveloped friendship with loyal personal secretary Samuel Insull. (He’s played in an endearing way by current Spider-Man Tom Holland, whose Marvel Cinematic Universe reunion with Cumberbatch is occasionally a little distracting.)


Playing the least familiar figure in the film, Shannon winds up giving the most compelling performance. The Current War is the latest in a long line of projects that prove he can bring across quiet decency as well as (or better than) over-the-top villainy. There’s an intriguing thread about Westinghouse’s relationship with his forward-thinking wife Marguerite (a wonderful Katherine Waterston), although as with most of the subplots, The Current War doesn’t take much time to dig into it. Still, in a movie that often plays like Amadeus for electricity, Shannon makes an effective Salieri to Cumberbatch’s Mozart. The Current War also bears more than a passing resemblance to Aaron Sorkin’s 2007 Broadway play The Farnsworth Invention, which traces a similar rivalry between two men racing to invent television.

Though The Current War centers on three great men of history, Gomez-Rejon tries in some ways to dismantle the traditional line of thinking about innovators. He raises pointed questions about the clash between the collaborative nature of scientific innovation and America’s habit of framing its technological leaps as the result of heroic pioneers single-handedly changing history. As The Current War sees it, Edison’s publicity skills earned him a prominent place in the history books, as much as his engineering prowess. Tesla and Westinghouse didn’t seek personal glory in the same way, which makes it harder to know how to celebrate their work, other than to retroactively turn them into great men of history as well. The Current War kind of does that, too.

In trying to have it both ways, Gomez-Rejon can’t quite recontextualize history the way he wants to. Yet even if The Current War is soft around the edges and a little soggy in the middle, there’s still something appreciably sparky at its core. As overstuffed and frenetic as the film is, in its best moments, The Current War manages to make an everyday utility seem just as magical as it did 120 years ago.




What We're Reading ~ 10/23/19

https://ift.tt/2p94mvd


Blackstone CEO's new book: What It Takes [Stephen Schwarzman]

Taking a look at Domino's [Timberwolf Equity Research]

Quick new interview with Peter Lynch [Fidelity]

Denise Chisholm on historical sector valuations [Barrons]

With DataXu buy, Roku unveils big ad ambitions [Digiday]

At Costco, everything resonates with the consumer [Retail Dive]

Disney, IP, and returns to marginal affinity [Matthew Ball]

Apple Pay and the future of mobile payments [PYMNTS]

Technical overview of Elastic (ESTC) [Motley Fool]

Inside Apple's long, bumpy road to Hollywood [Hollywood Reporter]

20 countries that will face population declines [Business Insider]





Programming Algorithms: A Crash Course in Lisp

https://ift.tt/2Oq88fy

The introductory post for this book, unexpectedly, received quite a lot of attention, which is nice since it prompted some questions, and one of them I planned to address in this chapter.

I expect that there will be two main audiences, for this book:

  • people who’d like to advance in algorithms and writing efficient programs — the major group
  • lispers, either accomplished or aspiring who also happen to be interested in algorithms

This introductory chapter is, primarily, for the first group. After reading it, the rest of the book’s Lisp code should become understandable to you. Besides, you’ll know the basics needed to run Lisp and experiment with it, if you so desire.

For the lispers, I have one comment and one remark. You might be interested to read this part just to understand my approach of utilizing the language throughout the book. Also, you’ll find my stance regarding the question that was voiced several times in the comments: whether it’s justified to use some 3rd-party extensions and to what extent or should the author vigilantly stick to only the tools provided by the standard.

The Core of Lisp

To effortlessly understand Lisp, you’ll have to forget, for a moment, any concepts of how programming languages should work that you might have acquired from your prior experience in coding. Lisp is simpler; and when people bring their Java, C or Python approaches to programming with it, first of all, the results are suboptimal in terms of code quality (simplicity, clarity, and beauty), and, what’s more important, there’s much less satisfaction from the process, not to mention very few insights and little new knowledge gained.

It is much easier to explain Lisp if we begin from a blank slate. In essence, all there is to it is just an evaluation rule: Lisp programs consist of forms that are evaluated by the compiler. There are 3+2 ways in which that can happen (a few one-line examples follow this list):

  • self-evaluation: all literal constants (like 1, "hello", etc.) are evaluated to themselves. These literal objects can be either built-in primitive types (1) or data structures ("hello")
  • symbol evaluation: separate symbols are evaluated as names of variables, functions, types or classes depending on the context. The default is variable evaluation, i.e. if we encounter a symbol foo the compiler will substitute in its place the current value associated with this variable (more on this a little bit later)
  • expression evaluation: compound expressions are formed by grouping symbols and literal objects with parenthesis. The following form (oper 1 foo) is considered a “functional” expression: the operator name is situated in the first position (head), and its arguments, if any, in the subsequent positions (rest). There are 3 ways to evaluate a functional expression:
    • there are 25 special operators that are defined in lower-level code and may be considered something like axioms of the language: they are pre-defined, always present, and immutable. Those are the building blocks, on top of which all else is constructed, and they include the sequential block operator, the conditional expression if, and the unconditional jump go, to name a few. If oper is the name of a special operator, the low-level code for this operator that deals with the arguments in its own unique way is executed
    • there’s also ordinary function evaluation: if oper is a function name, first, all the arguments are evaluated with the same evaluation rule, and then the function is called with the obtained values
    • finally, there’s macro evaluation. Macros provide a way to change the evaluation rule for a particular form. If oper names a macro, its code is substituted instead of our expression and then evaluated. Macros are a major topic in Lisp, and they are used to build a large part of the language, as well as provide an accessible way, for the users, to extend it. However, they are orthogonal to the subject of this book and won’t be discussed in further detail here. You can delve deeper into macros in such books as On Lisp or Let Over Lambda
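
To make the list concrete, here are a few one-liners, one per kind of evaluation (the values in the comments are what a typical REPL would return):

1            ; self-evaluation => 1
"hello"      ; self-evaluation => "hello"
pi           ; symbol evaluation: the built-in variable PI => 3.14159...
(+ 1 2)      ; ordinary function evaluation => 3
(if t 1 2)   ; special operator evaluation => 1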

It’s important to note that, in Lisp, there’s no distinction between statements and expressions, no special keywords, no operator precedence rules, and other similar arbitrary stuff you can stumble upon in other languages. Everything is uniform; everything is an expression in a sense that it will be evaluated and return some value.

A Code Example

To sum up, let’s consider an example of the evaluation of a Lisp form. The following one implements the famous binary search algorithm (that we’ll discuss in more detail in one of the following chapters):

(when (> (length vec) 0)
  (let ((beg 0)
        (end (length vec)))
    (do () ((= beg end))
      (let ((mid (floor (+ beg end) 2)))
        (if (> (? vec mid) val)
            (:= beg (1+ mid))
            (:= end mid))))
    (values beg
            (? vec beg)
            (= (? vec beg) val))))

It is a compound form. In it, the so-called top-level form is when, which is a macro for a one-clause conditional expression: an if with only the true-branch. First, it evaluates the expression (> (length vec) 0), which is an ordinary function for a logical operator > applied to two args: the result of obtaining the length of the contents of the variable vec and a constant 0. If the evaluation returns true, i.e. the length of vec is greater than 0, the rest of the form is evaluated in the same manner. The result of the evaluation, if nothing exceptional happens, is either false (which is called nil, in Lisp) or 3 values returned from the last form (values ...). ? is the generic access operator, which abstracts over different ways to query data structures by key. In this case, it retrieves the item from vec at the index of the second argument. Below we’ll talk about other operators shown here.

But first I need to say a few words about RUTILS. It is a 3rd-party library that provides a number of extensions to the standard Lisp syntax and its basic operators. The reason for its existence is that the Lisp standard is not going to change ever, and, like everything in this world, it has its flaws. Besides, our understanding of what’s elegant and efficient code evolves over time. The great advantage of the Lisp standard, however, which counteracts the issue of its immutability, is that its authors had put into it multiple ways to modify and evolve the language at almost all levels, starting from even the basic syntax. And this addresses our ultimate need, after all: we’re not so interested in changing the standard as we are in changing the language. So, RUTILS is one of the ways of evolving Lisp, and its purpose is to make programming in it more accessible without compromising the principles of the language. So, in this book, I will use some basic extensions from RUTILS and will explain them as needed. Surely, using 3rd-party tools is a question of preference and taste and might not be approved by some of the Lisp old-timers, but no worries: in your code, you’ll be able to easily swap them for your favorite alternatives.
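
To make the two non-standard operators used above concrete, here is a tiny illustration of my own (assuming RUTILS is loaded):

(defvar *vec* #(10 20 30))

(? *vec* 2)          ; generic access by key/index, here the same as AREF => 30

(defvar *x* 1)

(:= *x* (1+ *x*))    ; := is RUTILS' shorthand for assignment (SETF) => 2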

The REPL

Lisp programs are supposed to be run not only in a one-off fashion of simple scripts, but also as live systems that operate over long periods of time experiencing change not only of their data but also code. This general way of interaction with a program is called Read-Eval-Print-Loop (REPL), which literally means that the Lisp compiler reads a form, evaluates it with the aforementioned rule, prints the results back to the user, and loops over.

REPL is the default way to interact with a Lisp program, and it is very similar to the Unix shell. When you run your Lisp (for example, by entering sbcl at the shell) you’ll drop into the REPL. We’ll precede all REPL-based code interactions in the book with a REPL prompt (CL-USER> or similar). Here’s an example:

CL-USER> (print "Hello world")
"Hello world"
"Hello world"

A curious reader may be asking why "Hello world" is printed twice. It’s a proof that everything is an expression in Lisp. :) The print “statement”, unlike in most other languages, not only prints its argument to the console (or other output stream), but also returns it as is. This comes in very handy when debugging, as you can wrap almost any form in a print without changing the flow of the program.
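
For instance, wrapping a subexpression in print logs the intermediate value without changing the result of the enclosing form (a minimal illustration):

CL-USER> (+ (print (* 2 3)) 1)
6
7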

Obviously, if the interaction is not necessary, just the read-eval part may remain. But, what’s more important, Lisp provides a way to customize every stage of the process:

  • at the read stage special syntax (“syntax sugar”) may be introduced via a mechanism called reader macros
  • ordinary macros are a way to customize the eval stage
  • the print stage is conceptually the simplest one, and there’s also a standard way to customize object printing via the Common Lisp Object System’s (CLOS) print-object function (a minimal sketch of this follows the list)
  • and the loop stage can be replaced by any desired program logic
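
Here is what that third customization point can look like in practice; this is standard CLOS, but the point class and its printed form are invented purely for illustration:

(defclass point ()
  ((x :initarg :x :reader point-x)
   (y :initarg :y :reader point-y)))

;; Teach the printer (and hence the REPL) how to display POINT instances:
(defmethod print-object ((p point) stream)
  (print-unreadable-object (p stream :type t)
    (format stream "~a ~a" (point-x p) (point-y p))))

CL-USER> (make-instance 'point :x 1 :y 2)
#<POINT 1 2>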

Basic Expressions

The structured programming paradigm states that all programs can be expressed in terms of 3 basic constructs: sequential execution, branching, and looping. Let’s see how these operators are expressed in Lisp.

Sequential Execution

The simplest program flow is sequential execution. In all imperative languages, it is what is assumed to happen if you put several forms in a row and evaluate the resulting code block. Like this:

CL-USER> (print "hello")
         (+ 2 2)
"hello"
4

The value returned by the last expression is deemed the value of the whole sequence.

Here, the REPL-interaction forms an implicit unit of sequential code. However, there are many cases when we need to explicitly delimit such units. This can be done with the block operator:

CL-USER> (block test
           (print "hello")
           (+ 2 2))
"hello"
4

Such a block has a name (in this example: test). This allows us to prematurely end its execution by using the return-from operator:

CL-USER> (block test
           (return-from test 0)
           (print "hello")
           (+ 2 2))
0

A shorthand return is used to exit from blocks with a nil name (which are implicit in most of the looping constructs we’ll see further):

CL-USER> (block nil
           (return 0)
           (print "hello")
           (+ 2 2))
0

Finally, if we don't plan to ever return prematurely from a block, we can use the progn operator, which doesn't require a name:

CL-USER> (progn
           (print "hello")
           (+ 2 2))
"hello"
4

Branching

Conditional expressions calculate the value of their first form and, depending on it, execute one of several alternative code paths. The basic conditional expression is if:

CL-USER> (if nil
             (print "hello")
             (print "world"))
"world"
"world"

As we've seen, nil is used to represent logical falsity in Lisp. All other values are considered logically true, including the symbol t (also printed T), which directly denotes truth.
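
A couple of quick REPL probes to drive the point home (note that, unlike in many C-like languages, 0 and the empty string count as true; '() is just another notation for nil and is therefore false):

CL-USER> (if 0 "true" "false")
"true"
CL-USER> (if "" "true" "false")
"true"
CL-USER> (if '() "true" "false")
"false"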

And when we need to do several things at once in one of the conditional branches, that's one of the cases where we need to use progn or block:

CL-USER> (if (+ 2 2)
             (progn (print "hello")
                    4)
             (print "world"))
"hello"
4

However, we often don't need both branches of the expression, i.e. we don't care what will happen if our condition doesn't hold (or does hold). This is such a common case that there are special expressions for it in Lisp, when and unless:

CL-USER> (when (+ 2 2)
           (print "hello")
           4)
"hello"
4
CL-USER> (unless (+ 2 2)
           (print "hello")
           4)
NIL

As you can see, they're also handy because you don't have to explicitly wrap the sequential forms in a progn.

One other standard conditional expression is cond, which is used when we want to evaluate several conditions in a row:

CL-USER> (cond ((typep 4 'string)
                (print "hello"))
               ((> 4 2)
                (print "world")
                nil)
               (t
                (print "can't get here")))
"world"
NIL

The t case is a catch-all that will trigger if none of the previous conditions worked (as its condition is always true). The above code is equivalent to the following:

(if (typep 4 'string)
    (print "hello")
    (if (> 4 2)
        (progn (print "world")
               nil)
        (print "can't get here")))

There are many more conditional expressions in Lisp, and it's very easy to define your own with macros (that's actually how when, unless, and cond are defined). When the need to use a special one arises, we'll discuss its implementation.
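
To give a first taste of that, here is a minimal sketch of how a when-like operator could be defined as a macro (the name my-when is made up, to avoid clashing with the standard when; macros themselves are discussed later):

(defmacro my-when (condition &body body)
  ;; expand into the equivalent if + progn combination
  `(if ,condition
       (progn ,@body)
       nil))

CL-USER> (my-when (> 4 2)
           (print "hello")
           4)
"hello"
4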

Looping

Like with branching, Lisp has a rich set of looping constructs, and it's also easy to define new ones when necessary. This approach differs from mainstream languages, which usually have a small number of such statements and, sometimes, provide an extension mechanism via polymorphism. That is even considered a virtue, justified by the idea that it's less confusing for beginners, and it makes sense to a degree. Still, in Lisp, both generic and custom approaches manage to coexist and complement each other, and the tradition of defining custom control constructs is very strong. Why? One justification is the parallel to human languages: when and unless, as well as dotimes and loop, are either words taken directly from natural language or derived from natural language expressions. Our mother tongues are not so primitive and dry. The other reason is because you can™. That is, it's so much easier to define custom syntactic extensions in Lisp than in other languages that sometimes it's just impossible to resist. :) And in many use cases they make the code much simpler and clearer.

Anyway, for a complete beginner, you actually have to know the same number of iteration constructs as in any other language. The simplest one is dotimes, which iterates a counter variable a given number of times (from 0 to (- times 1)) and executes the body on each iteration. It is analogous to the for (int i = 0; i < times; i++) loop found in C-like languages.

CL-USER> (dotimes (i 3)
           (print i))
0
1
2
NIL

The return value is nil by default, although it may be specified in the loop header.
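
For instance, passing a third element in the header makes it the result form (a minimal illustration):

CL-USER> (dotimes (i 3 'done)
           (print i))
0
1
2
DONE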

The most versatile (and low-level) looping construct, on the other hand, is do:

CL-USER> (do ((i 0 (1+ i))
              (prompt (read-line) (read-line)))
             ((> i 1) i)
           (print (pair i prompt))
           (terpri))
foo
(0 "foo")
bar
(1 "bar")
2

do iterates a number of variables (zero or more) that are defined in the first part (here, i and prompt) until the termination condition in the second part is satisfied (here, (> i 1)), and, as with dotimes (and other do-style macros), it executes its body, i.e. the rest of the forms (here, print and terpri, which outputs a newline). read-line reads from standard input until a newline is encountered, 1+ returns the current value of i increased by 1, and pair is a RUTILS shorthand that makes a two-element list of its arguments.

All do-style macros (and there's quite a number of them, both built-in and provided by external libraries: dolist, dotree, do-register-groups, dolines, etc.) have an optional return value. In do it follows the termination condition; here, it's just the final value of i.
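
Here is a minimal dolist illustration of the same idea: the optional result form follows the list being iterated, just like the result form follows the termination condition in do.

CL-USER> (dolist (x '(1 2 3) 'done)
           (print x))
1
2
3
DONE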

Besides do-style iteration, there's also a substantially different beast in the CL ecosystem: the infamous loop macro. It is very versatile, although somewhat unlispy in terms of syntax, and it has a few surprising behaviors. Elaborating on it is beyond the scope of this book, especially since there's an excellent introduction to loop in Peter Seibel's "LOOP for Black Belts".

Many languages provide a generic looping construct that is able to iterate over an arbitrary sequence, a generator, and other similarly behaving things, usually some variant of foreach. We'll return to such constructs after discussing sequences in more detail.

And there's also an alternative iteration philosophy: the functional one, based on higher-order functions (map, reduce, and the like), which we'll also cover in more detail in the following chapters.
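
Just to give a small taste of that style (both mapcar and reduce are standard CL higher-order functions):

CL-USER> (mapcar (lambda (x) (* x x)) '(1 2 3))
(1 4 9)
CL-USER> (reduce '+ '(1 2 3))
6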

Procedures and Variables

We have covered the 3 pillars of structured programming, but the most essential constructs still remain: variables and procedures.

What if I told you that you can perform the same computation many times, but with different parameters... OK, OK, pathetic joke. So, procedures are the simplest way to reuse computations, and procedures accept arguments, which allows passing values into their bodies. A procedure, in Lisp, is called a lambda. You can define one like this: (lambda (x y) (+ x y)). Such a procedure is also often called a function, although it's quite different from what we consider a mathematical function; and, in this case, it's called an anonymous function, since it doesn't have a name. When used, it will produce the sum of its inputs:

CL-USER> ((lambda (x y) (+ x y)) 2 2)
4

It is quite cumbersome to refer to procedures by writing out their full code, and an obvious solution is to assign names to them. A common way to do that in Lisp is via the defun macro:

CL-USER> (defun add2 (x y)
           (+ x y))
ADD2
CL-USER> (add2 2 2)
4

The arguments of a procedure are examples of variables. Variables are used to name memory cells whose contents are used more than once and may be changed in the process. They serve different purposes:

  • to pass data into procedures
  • as temporary placeholders for some varying data in code blocks (like loop counters)
  • as a way to store computation results for further reuse
  • to define program configuration parameters (like the OS environment variables, which can also be thought of as arguments to the main function of our program)
  • to refer to global objects that should be accessible from anywhere in the program (like *standard-output* stream)
  • and more

Can we live without variables? Theoretically, maybe. At least, there's a so-called point-free style of programming that strongly discourages the use of variables. But, as they say, don't try this at home (at least, until you know perfectly well what you're doing :) Can we replace variables with constants, or single-assignment variables, i.e. variables that can't change over time? Such an approach is promoted by so-called purely functional languages. To a certain degree, yes. But, from the point of view of algorithm development, it makes life a lot harder by complicating many optimizations, if not ruling them out entirely.

So, how do we define variables in Lisp? You've already seen some of the variants: procedure arguments and let-bindings. Such variables are called local or lexical, in Lisp parlance. That's because they are only accessible locally, throughout the execution of the code block in which they are defined. let is the general way to introduce such local variables; it is, in fact, lambda in disguise (a thin layer of syntactic sugar over it):

CL-USER> (let ((x 2))
           (+ x x))
4
CL-USER> ((lambda (x) (+ x x)) 2)
4

While with lambda you can create a procedure in one place, possibly assign it to a variable (that's, in essence, what defun does), and then apply it many times in various places, with let you define a procedure and immediately call it, leaving no way to store it and re-apply it afterwards. That's even more anonymous than an anonymous function! Also, it incurs no overhead from the compiler. But the mechanism is the same.

Creating variables via let is called binding, because they are immediately assigned (bound to) values. It is possible to bind several variables at once:

CL-USER> (let ((x 2)
               (y 2))
           (+ x y))
4

However, we often want to define a series of variables, with later ones using the values of the previous ones. That is cumbersome to do with plain let, because you need nesting (as the bindings, like procedure arguments, are established independently):

(let ((len (length list)))
  (let ((mid (floor len 2)))
    (let ((left-part (subseq list 0 mid))
          (right-part (subseq list mid)))
      ...)))

To simplify this use case, there’s let*:

(let* ((len (length list))
       (mid (floor len 2))
       (left-part (subseq list 0 mid))
       (right-part (subseq list mid)))
  ...)

However, there are many other ways to define variables: binding multiple values at once; performing so-called "destructuring" binding, when the contents of a data structure (usually, a list) are assigned to several variables (the first element to the first variable, the second to the second, and so on); accessing the slots of a certain structure, etc. For such use cases, there's the with binding macro from RUTILS, which works like let* with extra powers. Here's a very simple example:

(with ((len (length list))
       (mid rem (floor len 2))
       ;; this group produces a list of 2 sublists
       ;; that are bound to left-part and right-part
       ;; (and the ; character starts a comment in Lisp)
       ((left-part right-part) (group mid list)))
  ...)

In the code throughout this book, you’ll only see these two binding constructs: let for trivial and parallel bindings and with for all the rest.

As we said, variables may not only be defined but also modified (otherwise they'd be called "constants" instead). To alter a variable's value we'll use := from RUTILS (it is an abbreviation of the standard psetf macro):

CL-USER> (let ((x 2))
           (print (+ x x))
           (:= x 4)
           (+ x x))
4
8

Modification, generally, is a dangerous construct, as it can create unexpected action-at-a-distance effects, when changing the value of a variable in one place of the code affects the execution of a different part that uses the same variable. This, however, can't happen with lexical variables: each let creates its own scope that shields the previous values from modification (just like passing arguments to a procedure call and modifying them within the call doesn't alter those values in the calling code):

CL-USER> (let ((x 2))
           (print (+ x x))
           (let ((x 4))
             (print (+ x x)))
           (print (+ x x)))
4
8
4

Obviously, when you have two lets in different places using the same variable name, they don't affect each other, and these two variables are actually totally distinct.

Yet, sometimes it is useful to modify a variable in one place and see the effect in another. Variables with such behavior are called global or dynamic (and also special, in Lisp jargon). They have several important purposes. One is defining important configuration parameters that need to be accessible anywhere. Another is referencing general-purpose singleton objects, like the standard streams or the state of the random number generator. Yet another is pointing to some context that can be altered in certain places subject to the needs of a particular procedure (for instance, the *package* global variable determines which package we operate in: CL-USER in all the previous examples). More advanced uses for global variables also exist. The common way to define a global variable is with defparameter, which specifies its initial value:

(defparameter *connection* nil
  "A default connection object.")  ; the string is a docstring describing the variable

Global variables, in Lisp, usually have so-called "earmuffs" around their names to remind the user of what they are dealing with. Due to their action-at-a-distance nature, they are not the safest programming language feature, and there's even a "global variables considered harmful" mantra. Lisp, however, is not one of those squeamish languages, and it finds many uses for special variables. By the way, they are called "special" due to a special feature, which greatly broadens the possibilities for their sane usage: if bound in a let, they behave like lexical variables in the sense that the previous value is preserved and restored upon leaving the body of the let:

CL-USER> (defparameter *temp* 1)
*TEMP*
CL-USER> (print *temp*)
1
CL-USER> (progn (let ((*temp* 2))
                  (print *temp*)
                  (:= *temp* 4)
                  (print *temp*))
                *temp*)
2
4
1

Procedures in Lisp are first-class objects. This means that you can assign one to a variable, as well as inspect and redefine it at run-time, and, consequently, do many other useful things with it. The RUTILS function call[1] will call a procedure passed to it as an argument:

CL-USER> (call 'add2 2 2)
4
CL-USER> (let ((add2 (lambda (x y) (+ x y))))
           (call add2 2 2))
4

In fact, defining a function with defun also creates a global variable, although in the function namespace. Functions, types, classes: all of these objects are usually defined globally. For functions, though, there's a way to define them locally, with flet:

CL-USER> (foo 1)
;; ERROR: The function COMMON-LISP-USER::FOO is undefined.
CL-USER> (flet ((foo (x) (1+ x)))
           (foo 1))
2
CL-USER> (foo 1)
;; ERROR: The function COMMON-LISP-USER::FOO is undefined.
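
To illustrate the earlier point that defun populates a global function namespace, here is a small probe (the exact printed representation of the function object differs between implementations):

CL-USER> (symbol-function 'add2)
#<FUNCTION ADD2>
CL-USER> (funcall #'add2 2 2)  ; #'add2 is shorthand for (function add2)
4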

Comments

Finally, there's one more bit of syntax we need to know: how to put comments in the code. Only losers don't comment their code, and comments will be used extensively throughout this book to explain parts of the code examples from inside them. Comments, in Lisp, start with a ; character and end at the end of the line, so the following snippet is a comment: ; this is a comment. There's also a common commenting style:

  • short comments that follow the current line of code start with a single ;
  • longer comments for a certain code block precede it, occupy a whole line or a number of lines, and start with ;;
  • comments for a code section that includes several Lisp top-level forms (global definitions) start with ;;; and also occupy whole lines

Besides, each global definition can have a special comment-like string, called the "docstring", that is intended to describe its purpose and usage and that can be queried programmatically. To put it all together, this is how the different kinds of comments may look:

;;; Some code section

(defun this ()
  "This has a curious docstring."
  ...)

(defun that ()
  ...
  ;; this is an interesting block don't you find?
  (block interesting
    (print "hello")))  ; it prints hello

Getting Started

I strongly encourage you to play around with the code presented in the following chapters of the book. Try to improve it, find issues with it, come up with fixes, and measure and trace everything. This will not only help you master some Lisp, but also give you a much deeper understanding of the algorithms and data structures being discussed, their pitfalls and corner cases. Doing that is, in fact, quite easy. All you need to do is install some Lisp (preferably SBCL or CCL), add Quicklisp, and, with its help, RUTILS.

As I said above, the usual way to work with Lisp is interacting with its REPL. Running the REPL is fairly straightforward. On my Mint Linux I’d run the following commands:

$ apt-get install sbcl rlwrap
...
$ rlwrap sbcl
...
* (print "hello world")
"hello world"
"hello world"
*

* is the raw Lisp prompt. It's, basically, the same as the CL-USER> prompt you'll see in SLIME. You can also run a Lisp script file: sbcl --script hello.lisp. If it contains just a single (print "hello world") line, we'll see the "hello world" phrase printed to the console.

This is a working, but not the most convenient, setup. A much more advanced environment is SLIME, which works inside Emacs (a similar project for Vim is called SLIMV). There exist a number of other solutions as well: some Lisp implementations provide an IDE, and some IDEs and editors provide Lisp integration.

After getting into the REPL, you’ll have to issue the following commands:

* (ql:quickload :rutilsx)
* (use-package :rutilsx)
* (named-readtables:in-readtable rutilsx-readtable)

Well, that's all the Lisp you'll need to know to start. We'll get acquainted with other Lisp concepts as they become needed in the following chapters of this book. Yet, you're all set to read and write Lisp programs. They may seem unfamiliar at first, but once you overcome the initial bump and get used to their parenthesized prefix surface syntax, I promise that you'll be able to recognize and appreciate their clarity and conciseness.

So, as they say in Lisp land, happy hacking!


Footnotes:

[1] call is the RUTILS abbreviation of the standard funcall. It was surely fun to be able to call a function stored in a variable back in the '60s, but now it has become so much more common that there's no need for the prefix ;)



from Hacker News https://ift.tt/YV9WJO
via IFTTT

New trailer for HBO’s Watchmen includes a nod to Rorschach’s most famous line

https://ift.tt/2O5ByQ9

Image: HBO

HBO is looking for its next big series as it bids farewell to Game of Thrones and Big Little Lies, and Damon Lindelof’s Watchmen is shaping up to be that series.

A new trailer for Watchmen from San Diego Comic-Con has premiered, building on moments from the first trailer. Some of Watchmen’s most beloved vigilante heroes, like Rorschach, appear alongside a number of new costumed heroes going about their day. There are references to Dr. Manhattan, and a previous class of vigilante fighters that longtime Watchmen fans will recognize. We even get a glimpse of Rorschach saying his famous, “all the whores and politicians will look up and shout ‘Save us,’ and I’ll look down and whisper ‘No,’” line. Or at least a nod to it.

The trailer is also another reminder that Lindelof’s Watchmen isn’t a direct adaptation of Alan Moore’s classic graphic novel. Moore’s 1986 title centered on a group of superhero vigilantes operating in an alternate United States during the Cold War. It is widely viewed as one of the most important graphic novels of all time, and is even taught at colleges and universities around the world. Lindelof’s version will tell a different story, but set within Moore’s world. He’ll use Watchmen character Rorschach as a central point, which should appeal to comic book readers.


Lindelof, who also created Lost and The Leftovers, has spoken publicly about fan concern that Watchmen will do an injustice to Moore’s work. That’s partially why he decided to go in a different direction. This won’t be a straight adaptation like Zack Snyder’s 2009 feature-film version, which divided critics but has a strong fan base.

“We have no desire to ‘adapt’ the twelve issues Mr. Moore and Mr. Gibbons created thirty years ago,” Lindelof wrote on Instagram when the show was in early production. “Those issues are sacred ground and they will not be retread nor recreated nor reproduced nor rebooted.” He says the show isn’t a sequel, but it does seem to take place in the world where the events of Watchmen happened:

Those original twelve issues are our Old Testament. When the New Testament came along it did not erase what came before it. Creation. The Garden of Eden. Abraham and Isaac. The Flood. It all happened. And so it will be with Watchmen. The Comedian died. Dan and Laurie fell in love. Ozymandias saved the world and Dr. Manhattan left it just after blowing Rorschach to pieces in the bitter cold of Antarctica.

Watchmen premieres this October.



from The Verge https://ift.tt/1jLudMg
via IFTTT

Watch the first trailer for Star Trek: Picard

https://ift.tt/2Z4vvw4

Image: CBS

At San Diego Comic-Con today, CBS showed off a full trailer for its upcoming series, Star Trek: Picard, featuring Patrick Stewart’s older version of the character as he’s drawn out of retirement. The series will debut in early 2020 on CBS All Access.

The show will take place more than 20 years after the events of Star Trek: The Next Generation, which aired between 1987 and 1994, and its film sequels: Generations (1994), First Contact (1996), Insurrection (1998), and Nemesis (2002).

The trailer comes after CBS released a vague teaser back in May featuring Picard enjoying his retirement from Starfleet in his vineyard. The trailer shows off an older Picard who has tried to put his past behind him, but he’s approached by a young woman in need of help. He goes to the Federation to try and help, and finds himself brought back into the fold. Along the way, there are glimpses of some familiar characters and lines.

This new series will see several familiar faces return to reprise their roles: Commander William Riker, played by Jonathan Frakes; Deanna Troi, played by Marina Sirtis; Data, played by Brent Spiner; a former Borg drone named Hugh, played by Jonathan Del Arco; and Seven of Nine from Star Trek: Voyager, played by Jeri Ryan. The series will also feature Alison Pill, Harry Treadaway, Isa Briones, Santiago Cabrera, and Michelle Hurd.

Earlier this week, executive producer Alex Kurtzman and showrunner Michael Chabon told Entertainment Weekly that the series would see an “older, haunted” Picard returning to space, although that doesn’t necessarily mean he’ll be returning to the ranks of Starfleet. And unlike The Next Generation, the show will come in a more modern, serialized format.



from The Verge https://ift.tt/1jLudMg
via IFTTT