Julian Robertson: "There's A Bubble" And "It's The Federal Reserve's Fault"

http://ift.tt/2w604Hp

Call it the "bearish billionaire" curse. One month ago, MarketWatch penned "7 billionaires who are worried about a stock-market correction" which listed Carl Icahn, David Tepper, Howard Marks, George Soros, Jeff Gundlach, Warren Buffett and Eliot Singer as some of the world's wealthiest people who are losing sleep over the S&P trading at all time highs.

Today we can add another investing billionaire - one of the original hedgers, Tiger Management co-founder Julian Robertson - who spoke to CNBC's Kelly Evans and stated in no uncertain terms that the market is a bubble and that "it's the Federal Reserve's fault, and the Federal Reserves all over the world."

KELLY EVANS: ... I just wonder what you think generally of where we are in the equity market today, where we are in the stock market in terms of valuation.

 

JULIAN ROBERTSON: Well, we're very, very high -- have very high valuations in most stocks. The market, as a whole, is quite high on a historic basis. And I think that's due to the fact that interest rates are so low that there's no real competition for the money other than art and real estate. And so I think that's why the valuations are so high. I think when rates do start to go up and the bonds become more attractive to investors, it will affect the margins.

 

KELLY EVANS: Do you think they're dangerously high right now?

 

JULIAN ROBERTSON: Well, that's a -- you know, it's pretty -- they're high.

 

KELLY EVANS: Is it the Federal Reserve's fault or...

 

JULIAN ROBERTSON: Yes. It's the Federal Reserve's fault, and the Federal Reserves all over the world. I mean, in Germany, in order to buy a bond, until recently, you actually had to pay interest. And, you know, that's certainly going to discourage a lot of people from doing so. You know, you could get a fairly good dividend in Nestle, but if you wanted to buy a Nestle bond, you had to pay a fairly heavy penalty.

 

KELLY EVANS: Doesn't seem to make a lot of sense.

 

JULIAN ROBERTSON: It doesn't.

 

KELLY EVANS: That said, this morning, the Treasury Secretary Steve Mnuchin, who was here, said that he thought that Chair Janet Yellen is, in his words, "obviously quite talented," when asked about her potential to lead the Fed for another term.  Do you disagree with him?

 

JULIAN ROBERTSON: I think she's going to probably be asked to stay on for a while. But I think because there's been collusion all over the world, let's get interest rates down. And it's not just the United States, it's all over the world... I think we need interest rates to appreciate, to go up, and to be . . . Because I think we are creating a bubble.

 

KELLY EVANS: A bubble in the market? 

 

JULIAN ROBERTSON: Yes.

 

KELLY EVANS: In the stock market?

 

JULIAN ROBERTSON: Yes, ma'am.

Video: Tiger Management's Julian Robertson - "Market is very high on a historic basis" (CNBC).

And yet, despite what he himself admits is a central bank-created "market bubble", Robertson is unable to stay away from his beloved tech names, as he admitted in the same interview:

KELLY EVANS: All right. Let me ask you about a couple of particular companies, just thinking about Apple, for example, which has a lot of cash overseas.  You were a holder of that going back a couple of years, and they have a big event today and are launching a bunch of new products. But has it just become too expensive for you guys, is it not -- or would you look at investing in Apple again?

 

JULIAN ROBERTSON: No, I think we should definitely look at Apple. Apple is not that expensive of a stock. There are a lot of disadvantages of being an old goat. One of the few advantages is the fact that we've seen all this a little bit before. And right now the Apples, the Facebooks, the Googles, those great growth companies are priced cheaper than they would have ever been in the '60s, '70s, and '80s. And I don't think a lot of people realize that.

 

KELLY EVANS: Well, you're trimming your positions in Facebook, and Google, and you're not in Apple right now.

 

JULIAN ROBERTSON: Well, I don't think I've . . . I kind of trade Facebook and those things a little bit. And I consider myself kind of a long-term player of Facebook.

 

KELLY EVANS: Even though you think the markets overall are expensive, these same emblematic tech names you actually don't think are that expensive?

 

JULIAN ROBERTSON: Correct.

To summarize: central banks have blown a bubble, it will burst, so buy FANGs.



from Zero Hedge http://ift.tt/qouXdu
via IFTTT

Greene County Bancorp: Small Bank With Over 100% Upside

http://ift.tt/2wYrjCK

Background

This current market has been challenging for value investors. Great values are hard to come by, and value names have had poor relative performance as of late. It is for this reason that I was particularly excited when I stumbled across shares of Greene County Bancorp, Inc. (NASDAQ:GCBC), one of the most compelling risk/reward opportunities I've invested in this year.

GCBC is a little thrift with a branch presence scattered in and around… you guessed it… Greene County, New York. GCBC’s flagship branch is located in Catskill, NY - a slow-to-no growth community with a population slightly below 50K. The remainder of GCBC’s branches are located in smaller communities in the surrounding area.

At the current market price, $24.75 at the time of this writing, common shares of GCBC have a significant margin of safety, and in this article, I’ll explore some of the reasons that support this thesis.

A brief aside on thrift conversions…

Before discussing the fundamentals of the business, I'd like to touch on a critical and unique aspect of GCBC that must be understood to accurately calculate the impressive margin of safety implied in the current share price. In 1998, shares of GCBC went public via a thrift conversion. Under the conversion, GCBC's mutual holding company retained a majority interest in the shares outstanding (54.2%), with public shareholders owning a minority interest (45.8%). As value investors have noted for decades, the true economic interest of common shares should disregard the MHC interest, as the MHC interest is more akin to treasury stock (e.g., the MHC waives any right to dividends). Most financial data providers report the market cap of thrift conversions incorrectly, basing their calculations on the total shares including the MHC interest, which significantly overstates the current market value of the common equity.

The basis of this analysis will assume the correct share count of ~3.9 million, yielding an adjusted market capitalization of ~$96 million at the current share price vs. the reported market cap of ~$210 million.

To validate this conclusion, one only needs to look at a 10-Q – let’s take 9/30/2016 to demonstrate the issue:

For the quarter, GCBC paid ~$369K in dividends to shareholders and declared $0.095 per share. Therefore, with a little math, we conclude that dividends were paid on ~3.9 million shares. The reported number of shares outstanding includes the MHC interest (~8.5 million total shares). 3.9 ÷ 8.5 = 45.8%, reconciling with the reported MHC interest described above. So we can conclude that standard valuation screeners will overstate the share count, current market cap, and dividend yield of GCBC by a substantial margin.
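For readers who want to check the math, here is a minimal sketch of the reconciliation in Python, using only the figures quoted above (the $24.75 price is from earlier in the article):

    # Back out GCBC's true (public) share count from the 9/30/2016 10-Q figures.
    dividends_paid = 369_000       # total dividends paid in the quarter ($)
    dividend_per_share = 0.095     # declared dividend per share ($)
    reported_shares = 8_500_000    # shares outstanding incl. the MHC interest

    public_shares = dividends_paid / dividend_per_share
    print(f"Dividend-receiving shares: {public_shares:,.0f}")       # ~3.9 million
    print(f"Public float: {public_shares / reported_shares:.1%}")   # ~ the 45.8% public interest

    price = 24.75                  # share price at the time of writing
    print(f"Adjusted market cap: ${public_shares * price / 1e6:.0f}M")    # ~$96M
    print(f"Reported market cap: ${reported_shares * price / 1e6:.0f}M")  # ~$210M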

It's easy to see why GCBC could so easily be off the radar, as most institutional investors would struggle to invest in such a small, thinly traded issue, and many retail investors erroneously see a market cap more than double the true figure at first glance.

The Deposit Franchise: Clear Path to FY 2018 Double-Digit Growth

GCBC's performance over the last few years has been excellent, especially given that the demographics of its markets are relatively unexciting. GCBC's branch network is small. When I say "small," I mean it: every GCBC branch is listed below:

As of FY 2016, all but three branches had at least double-digit deposit growth, and the flagship branches in Catskill performed particularly well over the period, adding over $100 million in deposits. Although 2017 branch-level data has not yet been released by the FDIC, the company's reported financials show it has successfully carried this momentum into 2017, with FY 2017 y/y deposit growth of ~16%. Overall, this is a compelling deposit growth story that suggests deposit share capture in GCBC's markets.

The more interesting question is whether above-industry growth is likely to continue in the future. The most impressive attribute of GCBC’s recent deposit growth is that it has come without the benefit of additional branch distribution. I’m somewhat skeptical GCBC will be able to maintain this organic growth rate across its existing branch network in the medium term. Nearly all of GCBC’s deposit growth is in Greene County, and as a market, this county has enjoyed strong overall deposit growth over the last few years. But there isn’t a clear demographic narrative to suggest that this banking market will be able to support this growth in the future. GCBC owns over a 60% deposit share in Greene County, so there is limited room to capture additional market share.

However, largely offsetting these negatives, GCBC announced a little tidbit last quarter: it is opening a new branch. The Bank of Greene County is entering the bustling community of Copake, NY. For a bank of GCBC's size, a new branch can have a material impact on FY 2018 expectations. The new branch location was left vacant by KeyBank, and KeyBank's branch network in the area gives us some clues as to the growth opportunity for the new GCBC branch. As of 2016, KeyBank had about $75 million in deposits at the location in question and a few nearby ATMs. After researching all of the nearby KeyBank locations, the next full-service KeyBank branch is about 30 minutes away, so there is clearly a nice opportunity for GCBC to pick up the stranded customer accounts. $75 million represents 9% growth above FY 2017 ending deposits. Between the new branch opening and a track record of execution across its existing branch network, GCBC is well positioned to hit double-digit deposit growth in FY 2018.
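As a rough sanity check, the two figures quoted above pin down the deposit base. This is a sketch using only the article's numbers; the implied deposit total is my own back-calculation, not a reported figure:

    # Implied FY 2017 deposit base from the article's own numbers.
    keybank_deposits = 75e6       # deposits at the vacated KeyBank location
    growth_contribution = 0.09    # article: $75M ~ 9% of FY 2017 ending deposits

    implied_deposits = keybank_deposits / growth_contribution
    print(f"Implied FY 2017 ending deposits: ~${implied_deposits / 1e6:,.0f}M")
    # ~$833M; capturing even part of the stranded accounts, on top of recent
    # organic growth, supports the double-digit FY 2018 deposit call.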

On the loan growth side, the ratio of loans to deposits has historically been quite low but trending in a positive direction. With GCBC's rapid deposit growth over the last few years, I would have expected deposit growth to outpace loan growth, but loans/deposits actually increased from 66% in FY 2013 to 74% in FY 2017, providing some lift to net interest margin over the same period. Assuming this relationship holds, loans, like deposits, are well positioned to see double-digit growth in FY 2018.

Worth noting, there doesn't appear to be any evidence of price concessions on either the deposit or loan side. It would be pretty easy for any bank to rapidly grow deposits and loans by jacking up yields on deposit accounts or drastically reducing underwriting quality, but GCBC's approach seems more methodical and protective of its long-term health.

GCBC’s Earnings Multiple: Highly Undervalued on a Trailing and Forward Basis

Putting the aforementioned pieces of the puzzle together, GCBC is very undervalued on both a trailing and forward earnings multiple basis. The earnings story in 2018 is favorable purely because of arithmetic. Because 2017 growth was strong, even assuming no deposit or loan growth in 2018, I estimate earnings will grow by ~8% during the next fiscal year. This growth is attributable purely to the fact that 2017 growth has not yet fully manifested in earnings (i.e., if ending loans stay flat y/y, average loans will still increase y/y). A flat-growth scenario positions GCBC for ~$12 million in after-tax earnings against a market cap of $96 million, so we're at about an 8x NTM P/E.
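The flat-scenario multiple is simple division, shown here for completeness using the adjusted market cap derived earlier:

    # NTM P/E under the no-growth ("Flat") 2018 scenario.
    adjusted_market_cap = 96e6    # public float only (see share-count section)
    flat_2018_earnings = 12e6     # article's flat-scenario after-tax estimate

    print(f"NTM P/E (Flat): {adjusted_market_cap / flat_2018_earnings:.0f}x")  # ~8x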

Of course, flat growth is rather too conservative in light of the deposit/loan growth story discussed above, so I've penciled out a proper base case. Before going further, I should highlight the operating efficiency of the business, which is very compelling. Over the last three years, GCBC has demonstrated significant operating leverage and has managed to improve an already stellar efficiency ratio: for FY 2015, 2016, and 2017, it was 61.4%, 58.6%, and 54.2%, respectively. I haven't seen any evidence that GCBC has a unique "special sauce" that allows it to operate so profitably, but I'll attribute the positive trend to a lean operating culture and the bank's ability to drive significant organic deposit growth over the last few years. For purposes of sketching out a base case, a 50% efficiency ratio seems like a reasonable floor, as anything below that suggests they are cutting costs to the bone.

For our base case, assuming a successful branch opening and even modest organic loan and deposit growth, I estimate FY 2018 earnings of ~$15 million, or ~6x NTM P/E. Very cheap for a bank that warrants a premium multiple. Both the "Flat" and "Base" 2018 outlooks are detailed below:

Regulatory Capital: Healthy Balance Sheet with Significant Excess Capital

Not unusual for its peer group, GCBC currently maintains, and historically has maintained, capital ratios well in excess of what is required under Basel III. This "excess" capital represents one of two potential positives: (1) the bank could easily support a significant buyback or one-time dividend, or (2) in the event of a downturn in the credit cycle, the excess capital would ensure the bank's safety and allow GCBC to opportunistically pick up assets at distressed prices. Given that the Agencies keep a watchful eye over bank capital plans, we're highly likely to see the latter rather than the former going forward. The bank's current capital position is outlined below:

Highlighting a few salient ratios, GCBC's Tier 1 Capital Ratio and Total Capital Ratio stand at 14.5% and 15.8% as of its 6/30/2017 call report. That represents a 6.0% and 5.3% cushion above regulatory minimums, respectively. This is significant: translating the most limiting capital ratio (Total Capital %) into a dollar cushion yields about $29 million, or roughly 30% of the current market cap. In other words, GCBC could return $29 million to shareholders as a one-time special dividend and still be well capitalized from a regulatory perspective.

That said, GCBC would likely not be willing or permitted to run its capital down to minimum levels, so a $20 million return to shareholders is a more realistic view of the 'excess' capital opportunity. Layering this attribute into the earnings analysis above, we see that GCBC is even more deeply undervalued than previously indicated. Assuming a $20 million reduction in market cap, GCBC's LTM, NTM (Flat), and NTM (Base) P/E stand at 6.8x, 6.3x, and 5.1x, respectively. A safe 20% earnings yield is a winner by my grade in any market. The correction for the share count and excess capital is illustrated below:
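A sketch of that adjustment; note the LTM earnings figure is my own back-calculation from the quoted 6.8x multiple, so treat it as approximate:

    # P/E multiples after netting the realistic 'excess' capital out of market cap.
    market_cap = 96e6
    excess_capital = 20e6
    capital_adjusted_cap = market_cap - excess_capital   # $76M

    earnings = {"LTM": 11.2e6, "NTM (Flat)": 12e6, "NTM (Base)": 15e6}
    for scenario, e in earnings.items():
        print(f"{scenario}: {capital_adjusted_cap / e:.1f}x")
    # -> ~6.8x, ~6.3x, ~5.1x; the base case is roughly a 20% earnings yield.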

However, I don't want to suggest that $20 million is likely to be returned to shareholders in the near term. Although the company is financially capable of returning this capital, I fully expect the conservative capital position to be maintained indefinitely. It is for this reason that we can take comfort in our downside protection – regardless of how the credit cycle performs over the next few years, GCBC's balance sheet can withstand some serious punishment.

Credit: No Evidence of Excessive Risk

Since we're on the topic of downside risk, I should touch on the credit culture and profile of the bank. I can be brief because there isn't anything particularly eye-popping about GCBC – which is a good thing. This credit cycle is certainly long in the tooth, and when credit does deteriorate, GCBC seems very well positioned to absorb losses. Over the last credit cycle, net charge-offs peaked at about 29bps. The company maintains loan loss reserves in the ballpark of 1.7%-1.8%. Nonaccruals peaked at 117bps of total loans. Overall, I would characterize GCBC's credit profile as clean and conservatively managed – again supporting the thesis that the bank would be fine, if not opportunistic, if the credit cycle heads south in a hurry.

Management: Betting on Themselves

A company as small as GCBC isn't going to have great IR, so outside of the numbers, we don't have much on which to judge the quality of management. Insider ownership and activity do provide some clues, and the report card is flawless. Total insider ownership is decent at about 22% of adjusted shares outstanding (adjusted to exclude the MHC interest). There are no reported instances of insider selling and many instances of insider buying in 2017. I will note that some of the reported transactions are humorously small: CEO Don Gibson purchased 22 shares on 8/8/2017, a transaction size of about 500 bucks – maybe there's a rationale there that escapes me.

Fair Value of GCBC

Taking into account much of what is described above, I've put pen to paper on a fair value for GCBC and the margin of safety implied by today's price. I've estimated the fair value of GCBC using two different methodologies appropriate for bank valuation.

Relative Valuation

First, I'll briefly describe GCBC's valuation relative to its peer set of thrifts/savings banks. It's important that our comparison banks reflect a similar underwriting and loan mix to GCBC's, so I've filtered the publicly traded list accordingly. Additionally, my filtered peer set includes other thrifts with MHC ownership, so I've made the same market cap adjustment to reflect their true shares outstanding. All said and done, at the time of this writing, the peer group trades at a median multiple of a little over 18x last year's earnings. Applying this same multiple to GCBC, we land at a price of about $52 per share, suggesting that GCBC trades at over a 50% discount to fair value – quite significant – and I would argue, for the reasons described above, that GCBC should trade at a premium to its peer group.
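The same arithmetic as a sketch; the 18.2x median is my stand-in for "a little over 18x," and the LTM earnings figure is the back-calculated estimate from the excess-capital section:

    # Peer-multiple fair value check.
    peer_median_pe = 18.2          # "a little over 18x" (assumed precise value)
    ltm_eps = 11.2e6 / 3.9e6       # back-calculated LTM earnings / adjusted shares (~$2.87)

    fair_value = peer_median_pe * ltm_eps
    print(f"Implied fair value: ~${fair_value:.0f}/share")      # ~$52
    print(f"Discount at $24.75: {1 - 24.75 / fair_value:.0%}")  # >50%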

Discounted Capital Valuation

I’ve put together what I would describe as the bank equivalent of a plain-vanilla DCF valuation. I say 'bank equivalent' because of the unique characteristics that impact bank economics: regulatory capital requirements (driven off of GAAP accounting), loan loss reserves, etc. Rather than laying out every assumption in the model, I’ll focus only on the core pieces:

  • 10 year financial projection that assumes annual growth descends to a GDP growth rate by Y6
  • 10 year total deposit CAGR of 6.3%
  • Loan to Deposit ratio held fixed to reported FY 2017
  • Investment, loan, and borrowing spreads are held constant to FY 2017
  • Interest rate trajectory consistent with the latest Fed dot plot (June 2017)
  • Mix of new originations consistent with reported FY 2017 balances
  • Total Basel III RWA / Total Assets of ~57%, consistent with FY 2017
  • Total Capital Ratio held fixed to FY 2017 – despite opportunity to return capital to shareholders
  • Allowance for Loan Losses / Total Loans = 1.80% in all years
  • Charge-Offs / Total Loans = 0.18% annually
  • Efficiency Ratio no lower than 50% in all years

I would characterize these assumptions as conservative. There are plenty of reasons to believe that management will be able to deliver better growth in the medium term than the 6.3% base case assumption. And as with any bank in this environment, a higher interest rate trajectory would have a significant positive impact on projected returns.
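To make the mechanics concrete, here is a minimal sketch of the 'discounted capital' idea under the assumptions listed above. Only a few parameters come from the article (the 57% RWA/assets ratio, the fixed Total Capital Ratio, the glide toward GDP growth, and the 11.2% required return cited just below); the starting asset base, ROA, and GDP rate are illustrative stand-ins, and the full model's rate, spread, origination-mix, and efficiency machinery is collapsed into the single ROA input:

    # Sketch of a bank 'discounted capital' valuation: distributable earnings are
    # what remains after retaining enough capital to hold the capital ratio fixed
    # as risk-weighted assets grow.
    required_return = 0.112      # CAPM-derived, per the article
    start_growth = 0.10          # illustrative year-1 growth (hypothetical)
    gdp_growth = 0.04            # terminal growth reached by year 6 (assumed)
    roa = 0.012                  # illustrative after-tax return on assets (hypothetical)
    assets = 900e6               # illustrative starting balance sheet (hypothetical)
    rwa_ratio = 0.57             # Basel III RWA / total assets, per the article
    capital_ratio = 0.158        # Total Capital Ratio held fixed at FY 2017

    pv = 0.0
    for year in range(1, 11):
        # growth glides linearly from the starting rate down to GDP by year 6
        g = max(gdp_growth, start_growth - (start_growth - gdp_growth) * (year - 1) / 5)
        new_assets = assets * (1 + g)
        earnings = new_assets * roa
        retained = (new_assets - assets) * rwa_ratio * capital_ratio  # capital build
        distributable = earnings - retained
        pv += distributable / (1 + required_return) ** year
        assets = new_assets

    # terminal value: growing perpetuity on year-10 distributable earnings
    terminal = distributable * (1 + gdp_growth) / (required_return - gdp_growth)
    pv += terminal / (1 + required_return) ** 10
    print(f"Illustrative equity value: ~${pv / 1e6:.0f}M")

Dividing the resulting equity value by the ~3.9 million adjusted shares gives a per-share figure to compare against the $24.75 market price, and the same loop can be re-run with bearish inputs to reproduce the sensitivity exercise described below.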

My base case assumptions above suggest a significant margin of safety. Assuming a CAPM-derived required return of 11.2%, my estimate of GCBC's fair value is as follows:

Of course, as with all models, I will be wrong about some (or all) of the assumptions above, so I've re-run the valuation, sensitizing two variables that are particularly impactful: deposit growth and projected interest rates. For deposits, I run the growth rate all the way down to a GDP equivalent, essentially a scenario that assumes the company's recent success was an anomaly never to be repeated. For interest rates, under the most bearish scenario, I shocked the current Fed projection by 100bps in all years, severely impairing the net interest margin of the business. Taking the most extreme assumption in both cases, we still reach a fair value in excess of the current share price (~$29 per share).

Key Risks to the Valuation

Having highlighted many of the key variables of consideration, it’s worth spending some time discussing certain unknowns (at least to this author) that could materially impact GCBC’s return in the future.

Fed policy obviously has a significant impact on the returns of all banks. I don't have an edge here, and frankly, very few do. Maybe rates go up more than expected? Maybe they don't? I don't really care, as there's still a compelling investment rationale whether rates are high or low. However, if rates do plummet, the market will take GCBC lower along with the rest of the sector.

Second, banking distribution is fundamentally changing. It should be obvious to most bank investors that physical branches are less critical to distribution than they were 20 years ago, and digital banking is playing an increasingly prominent role. Again, I don't really have an edge here either, and I cannot possibly handicap the timing and nature of these changes. I do know that large institutions can invest to stay ahead of these trends, while smaller banks cannot really afford to do so. That said, this is a long-term trend, and we'll likely have made our money back in earnings before it negatively impacts GCBC.

Conclusion

GCBC is one of the better prospects I've uncovered this year, and I'm definitely buying at these levels. As the company continues to execute and expand, I expect it will start to grab the attention of the institutional investors that play in the micro-cap arena. When it does, we should see the multiple normalize. Based on the two valuation methodologies discussed in this article, shares of GCBC would have to appreciate between 110% and 130% to reach fair value. We have room to run.

This article is part of Seeking Alpha PRO. PRO members receive exclusive access to Seeking Alpha's best ideas and professional tools to fully leverage the platform.

Disclosure: I am/we are long GCBC.

I wrote this article myself, and it expresses my own opinions. I am not receiving compensation for it (other than from Seeking Alpha). I have no business relationship with any company whose stock is mentioned in this article.



from Seeking Alpha Editors' Picks stocks http://ift.tt/2a97jA2
via IFTTT

Are You Sure You Want Medicare for All?: New at Reason

http://ift.tt/2wn07tJ

Expanding existing government healthcare systems would also spread the reach of their already messy problems.

Senator Bernie Sanders (I-Vt.) plans to unveil his long-awaited "Medicare for all" proposal for government-controlled, single-payer health care. His colleague, Sen. Elizabeth Warren (D-Mass.), is all-in on the scheme. "Medicare for All is one way that we can give every single person in the country access to high quality health care," she writes. "Everyone is covered. Nobody goes broke paying a medical bill. Families don't have to bear the costs of heartbreaking medical disasters on their own."

And for starting us along the path to all of that high-quality care, she adds, "We owe a huge debt to President Obama."

Well, there is something there. Debt, that is. Huge, accumulating mounds of it, swamping everything in sight. In 2001, the Congressional Budget Office warned that spending on retirees—specifically Social Security and Medicare—"will consume…almost as much of the economic output in 2030 as does the entire federal government today."

"Notwithstanding recent favorable developments," the Medicare Trustees conceded in their report this year, "current-law projections indicate that Medicare still faces a substantial financial shortfall that will need to be addressed with further legislation." The report foresees that "the trust fund becomes depleted in 2029."




from Hit & Run : Reason Magazine https://reason.com/blog
via IFTTT

List of Great EE Books Suggested by Top Profs. From MIT, Stanford, Harvard,

http://ift.tt/2t5AWOn

Blog

September 4, 2017

Today we are very excited to announce that we have received a list of books from a top expert in the field of electronics. Professor Ali Niknejad is an Electrical Engineering and Computer Sciences professor at the University of California, Berkeley. Here is a short biography of Professor Niknejad:

Niknejad
Ali Niknejad
Professor, Electrical Engineering and Computer Sciences, University of California, Berkeley. Ph.D. - UC Berkeley

Ali M. Niknejad was born in Tehran, Iran, and moved to San Diego, CA at the age of 12. He received the B.S.E.E. degree from the University of California, Los Angeles, in 1994, and his Master's and Ph.D. degrees in electrical engineering from the University of California, Berkeley, in 1997 and 2000. He is currently a professor in the EECS department at UC Berkeley and faculty director of the Berkeley Wireless Research Center (BWRC). Prof. Niknejad is the recipient of the 2012 ASEE Frederick Emmons Terman Award for his textbook on electromagnetics and RF integrated circuits. He is also the co-recipient of the 2013 Jack Kilby Award for Outstanding Student Paper for his work on an efficient quadrature digital spatial modulator at 60 GHz, the co-recipient of the 2010 Jack Kilby Award for Outstanding Student Paper for his work on a 90 GHz pulser with 30 GHz of bandwidth for medical imaging, and the co-recipient of the Outstanding Technology Directions Paper award at ISSCC 2004 for co-developing a modeling approach for devices up to 65 GHz. He is a co-founder of HMicro and inventor of the REACH(™) technology, which has the potential to deliver robust wireless solutions to the healthcare industry. His research interests lie within the area of wireless and broadband communications and biomedical imaging. The focus areas of his research include analog, RF, mixed-signal, and mm-wave circuits, device physics and compact modeling, and numerical techniques in electromagnetics. Professor Niknejad is an IEEE Fellow.


August 26, 2017

Today we are super excited to provide you with a list of books from a world-renowned expert in error control coding: the inventor of turbo codes. We contacted Professor Claude Berrou of Telecom-Bretagne in France, and he kindly sent us a list of some fantastic books in error control coding. Here is a short biography of Professor Berrou:

Berrou
Claude Berrou
Professor of Electrical Engineering, Telecom-Bretagne, France

Claude Berrou is a French Professor of Electrical Engineering at Telecom-Bretagne and is the inventor of Turbo Codes. Turbo Codes are a class of high-performance forward error correction (FEC) codes developed around 1990–91 (but first published in 1993), which were the first practical codes to closely approach the channel capacity, a theoretical maximum for the code rate at which reliable communication is still possible given a specific noise level. Turbo codes are used in 3G/4G mobile communications (e.g., in UMTS and LTE) and in (deep space) satellite communications as well as other applications where designers seek to achieve reliable information transfer over bandwidth- or latency-constrained communication links in the presence of data-corrupting noise.

Professor Berrou’s current research topics, besides algorithm/silicon interaction, are electronics and digital communications at large, error correction codes, turbo codes and iterative processing, soft-in/soft-out (probabilistic) decoders and computational neurosciences (since 2008).

Here are some of his major awards:

  • The SEE Ampère Medal (1997).
  • The Golden Jubilee Award for Technological Innovation of IEEE Information Theory Society (1998), together with Alain Glavieux and Punya Thitimajshima.
  • The IEEE Richard W. Hamming Medal (2003), together with Alain Glavieux.
  • The French Grand Prix France Télécom of Académie des sciences (2005).
  • The Marconi Prize (2005).
  • Nominated for the European Inventor of the Year Award (2006).
  • Elected a member of the French Academy of Sciences in 2007.
  • IEEE Fellow in 2008.

August 19, 2017

Today we are very excited to publish a list of some great books from a wonderful and accomplished scientist. We contacted Professor Martin E. Hellman of Stanford University and he kindly provided a list of books. Here is a short biography of Professor Hellman:

Hellman
Martin E. Hellman
Professor Emeritus of Electrical Engineering at Stanford University, Ph.D. - Stanford

Martin E. Hellman is a Professor Emeritus of Electrical Engineering at Stanford University. He is a world-renowned expert in cryptography. Hellman and Whitfield Diffie’s paper New Directions in Cryptography was published in 1976. It introduced a radically new method of distributing cryptographic keys, which went far toward solving one of the fundamental problems of cryptography, key distribution. It has become known as Diffie–Hellman key exchange. Hellman has been a longtime contributor to the computer privacy debate. He and Diffie were the most prominent critics of the short key size of the Data Encryption Standard (DES) in 1975. Hellman also served (1994–96) on the National Research Council’s Committee to Study National Cryptographic Policy, whose main recommendations have since been implemented. Here are some of his major awards:

  • 1981: IEEE Donald G. Fink Prize Paper Award (together with Whitfield Diffie)
  • 1997: he was awarded The Franklin Institute’s Louis E. Levy Medal
  • 1998: Hellman received a Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society
  • 2000: he won the Marconi Prize for his invention of public-key cryptography to protect privacy on the Internet, also together with Whit Diffie
  • 2010: he won the IEEE Richard W. Hamming Medal
  • 2011: he was inducted into the National Inventors Hall of Fame.
  • 2011: he was made a Fellow of the Computer History Museum for his work, with Whitfield Diffie and Ralph Merkle, on public key cryptography
  • 2015: he won the Turing Award together with Whitfield Diffie.

August 12, 2017

Today we are happy to announce that we have added a new section to the website called Facts. Under this new section we are going to add interesting facts and news about new books, top experts, top schools and much more. In our first post under this section we have a short study on the ranking of the universities that we have on Dorado List. Based on several websites that rank universities, we have shown why we have chosen universities such as MIT, Stanford, Oxford and others for the Dorado list. We hope that you find this section interesting and useful.

As always we love to hear your comments and feedback and if you have any suggestion on what to add to the website, please let us know by sending us an email.


August 7, 2017

We have just added lists of many books under the University of Cambridge. Here are the categories of the books that we have added under Cambridge:

  • Programming
  • Power Electronics
  • Circuits
  • Digital
  • Routing
  • Computer Science
  • Multiprocessor
  • Security
  • Photonics
  • Computer Vision
  • Natural Language Processing (NLP)

We hope you find the list of books useful. As always we love to hear your comments and feedback.


August 4, 2017

When we received an answer from a legend in Information Theory, we congratulated ourselves. Professor Shlomo Shamai (Shitz) kindly responded to our request and provided us with a great list of books in Information Theory and Communications. Here is a short biography of this fantastic information theorist:

Shamai
Shlomo Shamai (Shitz)
Distinguished Professor, William Fondiller Professor of Telecommunications, Department of Electrical Engineering, Technion-Israel Institute of Technology. Ph.D. - Technion—Israel Institute of Technology

Professor Shlomo Shamai (Shitz) is a distinguished professor in the Department of Electrical Engineering at the Technion − Israel Institute of Technology. Professor Shamai is an information theorist and winner of the 2011 Shannon Award. He received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the Technion in 1975, 1981 and 1986, respectively. He is an IEEE Fellow and a member of the International Union of Radio Science.

Here are some of his major awards:

  • 1999 van der Pol Gold Medal of URSI
  • 2000 co-recipient of the IEEE Donald G. Fink Prize Paper Award
  • 2003 Joint IT/COM Societies Paper Award
  • 2004 Joint IT/COM Societies Paper Award
  • 2007 Information Theory Society Paper Award
  • 2009 The European Commission FP7, Network of Excellence in Wireless COMmunications (NEWCOM++) Best Paper Award
  • 2010 Thomson Reuters Award for International Excellence in Scientific Research
  • 2011 Claude E. Shannon Award from the IEEE Information Theory Society
  • 2014 Rothschild Prize in Mathematics/Computer Sciences and Engineering
  • 2017 IEEE Richard W. Hamming Medal

July 29, 2017

Today we are super excited to announce that we have received a list of suggested books from a world-renowned expert in communications. Dr. Nambi Seshadri, former CTO of the Mobile & Wireless group at Broadcom, kindly sent us the list of books that he thinks are useful for us as students and engineers. Here is a short biography of Dr. Seshadri:

Seshadri
Nambi Seshadri
Former CTO, Mobile & Wireless, Broadcom Corporation, Ph.D. - Rensselaer Polytechnic Institute

Nambi Seshadri is currently Professor of Electrical and Computer Engineering at the University of California, San Diego (UCSD). He also serves as a senior technical advisor to Quantenna Communications and a few startups.

Prior to joining UCSD, he was with Broadcom Corporation, which he joined in 1999. He was the first employee dedicated to developing the company's wireless strategy, which began with wireless connectivity products and subsequently entered the cellular baseband market. As CTO of the Mobile Platforms and Wireless Connectivity business groups, he helped drive Broadcom's entry into 2G and 3G cellular, mobile multimedia, low-power Wi-Fi for handsets, combo chips that integrate multiple wireless connectivity technologies, GPS, and 4G technologies, as well as the development of a strong IPR portfolio.

Prior to joining Broadcom, he served more than 13 years with AT&T, first as a member of the technical staff in the Signal Processing Research Department of AT&T Bell Laboratories and later as Head of Communications Research at AT&T Shannon Labs. His research has been focused on developing techniques for reliable transmission of data, speech, and audio for mobile communications.

During the first few years at Bell Labs, his research collaborations resulted in novel techniques for understanding the impact of channel errors on low bit rate speech coders resulting in combined speech and channel coding and decoding solutions. In the 1990s, he co-invented space-time trellis codes with Vahid Tarokh and Robert Calderbank and their paper on this topic won the 1999 IEEE Information Theory Society Best Paper Award.

Another paper on the implementation of a modem based on space-time coding (co-authored with Tarokh, Calderbank and Ayman Naguib) was selected by IEEE Communications Society in 2002 as one of the 50 most influential works published by IEEE Communications Society in its first 50 years - The Best of Best: 50 Years of Communications and Networking Research.

These and additional works on space-time codes from AT&T and other institutions resulted in the rapid establishment of space-time codes as an important area of wireless communications.

He also helped drive adoption of hybrid ARQ in EDGE cellular transmission as a technique for robust link adaptation.

Nambi received a B.E. degree in Electronics and Communications from Regional Engineering College (now called NITT), Tiruchirapalli, India, and a M.S. and Ph.D. from Rensselaer Polytechnic Institute, Troy, NY. He is a Member of National Academy of Engineering, a Fellow of IEEE, Distinguished alumnus of National Institute of Technology, Tiruchirapalli, India and holds more than 75 patents.


July 26, 2017

Today we are happy to announce that we have added new books that are being or have been used at Oxford. The books are in the following categories:

  • Math
  • Signals and Systems
  • Artificial Intelligence
  • Computer Architecture
  • Computer Graphics

We hope you find the new list useful. As always we love to hear from you. Please send us your comments and suggestions, and name of top experts whom you think we should ask for book list suggestions.


July 23, 2017

Today we are super excited to announce that we have received a list of books from one of the great American scientist icons. When you see the list of his accomplishments, you will agree that he is a true legend. Professor Carver Mead has been very kind to us and encouraged us to continue working on Dorado List; he thinks this is a useful service to the scientific community. Here is a short biography of this wonderful pioneer:

Mead
Carver Mead
Gordon and Betty Moore Professor Emeritus of Engineering and Applied Science at the California Institute of Technology (Caltech). Ph.D. - Caltech

A pioneer of modern microelectronics, Carver Mead has made contributions to the development and design of semiconductors, digital chips, and silicon compilers, technologies which form the foundations of modern very-large-scale integration chip design. In the 1980s, he focused on electronic modeling of human neurology and biology, creating "neuromorphic electronic systems." Mead has been involved in the founding of more than 20 companies. In 1960, he was the first person to describe and demonstrate a three-terminal solid-state device based on the operating principles of electron tunneling and hot-electron transport. In 1966, Mead designed the first gallium arsenide gate field-effect transistor using a Schottky barrier diode to isolate the gate from the channel. Mead is credited by Gordon Moore with coining the term Moore's law. In 1968, Mead demonstrated, contrary to common assumptions, that as transistors decreased in size, they would not become more fragile or hotter or more expensive or slower. Rather, he argued that transistors would get faster, better, cooler and cheaper as they were miniaturized. Mead was the first to predict the possibility of storing millions of transistors on a chip. He was one of the first researchers to investigate techniques for very-large-scale integration, designing and creating high-complexity microchips, and he taught the world's first VLSI design course, at Caltech in 1970. He co-authored the landmark text "Introduction to VLSI Systems," published in 1979; a pioneering textbook, it has been used in VLSI integrated circuit education all over the world for decades. Mead and his Ph.D. student David L. Johannsen created the first silicon compiler, capable of taking a user's specifications and automatically generating an integrated circuit. Later, he worked with Professor John Hopfield and Nobelist Richard Feynman, helping to create three new fields: neural networks, neuromorphic engineering, and the physics of computation. As space is limited, we leave interested readers to learn more about this amazing scientist on his Wikipedia page. Here is a list of some of his major awards:

  • 2015, Fellow, National Academy of Inventors (NAI) for his “unparalleled commitment to excellence in academic invention.”
  • 2011, BBVA Foundation Frontiers of Knowledge Award of Information and Communication Technologies “… for his influential thinking in silicon technology. His work has enabled the development of the microchips that drive the electronic devices (laptops, tablets, smartphones, DVD players) ubiquitous in our daily lives.”
  • 2005, Progress Medal of the Royal Photographic Society
  • 2002, National Medal of Technology
  • 2002, Fellow of the Computer History Museum “for his contributions in pioneering the automation, methodology and teaching of integrated circuit design”.
  • 2001, Dickson Prize in Science
  • 1999, Lemelson-MIT Prize
  • 1997, Allen Newell Award, Association for Computing Machinery
  • 1996, John Von Neumann Medal, Institute of Electrical and Electronics Engineers
  • 1996, Phil Kaufman Award for his impact on electronic design industry
  • 1992, Award for Outstanding Research, International Neural Network Society
  • 1985, John Price Wetherill Medal from The Franklin Institute, with Lynn Conway
  • 1985, Harry H. Goode Memorial Award, American Federation of Information Processing Societies
  • 1984, Harold Pender Award, with Lynn Conway
  • 1981, Award for Achievement from Electronics Magazine, with Lynn Conway
  • 1971, T.D. Callinan Award, "in recognition of an outstanding contribution to the literature of dielectrics."

We are sure you will find his list very interesting, useful and exciting. Please don't hesitate to contact us and introduce other great scientists, and we will try to contact them and ask for their lists of great books.


July 15, 2017

Today we are happy to announce that we have added more books to the list of books that are being or have been used at USC. We have added books in the following categories:
- Computer Security
- Programming
- Math
- Graphics
- Algorithm
- Probability
- Artificial Intelligence
- Computing

We hope you find the list of books that are being used or have been used at USC helpful. As usual, we love to hear your comments and suggestions; please send us an email and we will try to get back to you as soon as possible.


July 14, 2017

Today we are very excited to announce that we have added one more university to the list of top universities: the University of Cambridge, one of the top universities in the UK and in the world. For now we have added books that are being used or have been used at Cambridge in the following categories:
- Electronics
- Control
- Algorithm
- Discrete Math
- Computer Graphics
- Operating Systems

We hope you find the list of books that are being used or have been used at the University of Cambridge useful.


July 12, 2017

Today we are happy to report that we have added a new list of electrical engineering books under the UCLA link. The new categories that we have added are:
- VLSI
- Verilog
- Math
- Network
There you will see lists of books that are used as references in these categories at UCLA.


July 9, 2017

Today we are extremely happy to be able to post the list suggested by a world-renowned expert in Analog Electronics. Professor Boris Murmann of Stanford University kindly agreed to send us his list of favorite books. Here is a short biography of this wonderful expert:

Murmann
Boris Murmann
Professor, Electrical Engineering, Stanford University. Ph.D. - UC Berkeley

Boris Murmann is a Professor of Electrical Engineering at Stanford University. He joined Stanford in 2004 after completing his Ph.D. degree in electrical engineering at the University of California, Berkeley in 2003. From 1994 to 1997, he was with Neutron Microelectronics, Germany, where he developed low-power and smart-power ASICs in automotive CMOS technology. Since 2004, he has worked as a consultant with numerous Silicon Valley companies. Dr. Murmann’s research interests are in mixed-signal integrated circuit design, with special emphasis on sensor interfaces, data converters and custom circuits for statistical inference. In 2008, he was a co-recipient of the Best Student Paper Award at the VLSI Circuits Symposium and a recipient of the Best Invited Paper Award at the IEEE Custom Integrated Circuits Conference (CICC). He received the Agilent Early Career Professor Award in 2009 and the Friedrich Wilhelm Bessel Research Award in 2012. He has served as an Associate Editor of the IEEE Journal of Solid-State Circuits, as well as the Data Converter Subcommittee Chair and the Technical Program Chair of the IEEE International Solid-State Circuits Conference (ISSCC). He is the founding Faculty Co-Director of the Stanford SystemX Alliance and the faculty director of Stanford’s System Prototyping Facility (SPF). He is a Fellow of the IEEE.


July 4, 2017

Today we have added a lot of books under Princeton University.

As usual, we will get better if you help us spread the word about Dorado List. Please share our links on social media and let your friends know about us; we really appreciate it. Also, we love to hear from you. If you have suggestions or comments, please send us an email; we read all the emails we receive and try hard to respond to every one.

Please spread the word by sharing on social media.

July 3, 2017

Today we are extremely happy and honored to show you the list of books suggested by one of the icons of signal processing, someone whom many of us have known for many years and whose books we have read in both our undergraduate and graduate years. Professor Alan V. Oppenheim of MIT is not only a great scholar and a fantastic teacher, but he has also been extremely kind to us with his encouragement and his very intelligent suggestions. We are very thankful for his kindness and help. Please see his suggestions under Signal Processing and Math. Here is a short biography of him:

Oppenheim
Alan V. Oppenheim
Ford Professor of Engineering, Electrical Engineering and Computer Science (EECS), MIT. Sc.D. - MIT

Professor Alan V. Oppenheim is a principal investigator in the Research Laboratory of Electronics (RLE) at the Massachusetts Institute of Technology (MIT). He received the S.B. and S.M. degrees in 1961 and the Sc.D. degree in 1964, all in electrical engineering, from the Massachusetts Institute of Technology. He is also the recipient of an honorary doctorate from Tel Aviv University.

In 1964, Professor Oppenheim joined the faculty at MIT, where he is currently Ford Professor of Engineering in the Department of Electrical Engineering and Computer Science. Since 1967 he has been affiliated with MIT Lincoln Laboratory, and since 1977 with the Woods Hole Oceanographic Institution. His research interests are in the general area of signal processing and its applications. He is coauthor of the widely used textbooks Digital Signal Processing, Discrete-Time Signal Processing (now in its third edition) and Signals and Systems (now in its second edition). He is also editor of several advanced books on signal processing.

Dr. Oppenheim is a member of the National Academy of Engineering, a Life Fellow of the IEEE, a member of Sigma Xi and Eta Kappa Nu. He has been a Guggenheim Fellow and a Sackler Fellow at Tel Aviv University and a MacVicar Fellow at MIT. He has received a number of awards for outstanding research and teaching, including the IEEE Education Medal, the IEEE Jack S. Kilby Signal Processing Medal, the IEEE Centennial Medal, and the IEEE Third Millennium Medal. He has received the Society Award, the Technical Achievement Award and the Senior Award of the IEEE Society on Acoustics, Speech and Signal Processing. He has also received a number of awards at MIT for excellence in teaching, including the Bose Award and the Everett Moore Baker Award.


June 29, 2017

Today we are extremely happy and honored to post the list suggested by an icon in the field of signal processing. When we received a response from Professor Bernard Widrow of Stanford University we all jumped up, and we knew that Dorado List is going in the right direction. We are sure many people in the EE field know or have heard of Professor Widrow. Here is a short biography of this wonderful expert:

Widrow
Bernard Widrow
Professor Emeritus, Electrical Engineering Department, Stanford University. Ph.D. - MIT

Bernard Widrow (born December 24, 1929) is a U.S. Professor Emeritus of electrical engineering at Stanford University. He is the co-inventor of the Widrow–Hoff least mean squares filter (LMS) adaptive algorithm with his then doctoral student Ted Hoff. The LMS algorithm led to the ADALINE and MADALINE artificial neural networks and to the backpropagation technique. He made other fundamental contributions to the development of signal processing in the fields of geophysics, adaptive antennas, and adaptive filtering. Here is a list of some of his honors:
- Elected Fellow IEEE, 1976
- Elected Fellow AAAS, 1980
- IEEE Centennial Medal, 1984
- IEEE Alexander Graham Bell Medal, 1986
- IEEE Neural Networks Pioneer Medal, 1991
- Inducted into the National Academy of Engineering, 1995
- IEEE Signal Processing Society Award, 1999
- IEEE Millennium Medal, 2000
- Benjamin Franklin Medal, 2001
Note: the above biography is based on this link.

For little longer biography please visit this link or this link.


June 28, 2017

Today we are happy to announce that we have added more lists to our website. Now, under all the universities, there are some book categories filled with lists of books. Again, this is a work in progress, but we are updating and increasing the number of lists almost daily. Here is what we added today:

Under USC we added these fields:
- Operating System
- Programming

We added the following fields under Oxford:
- Optimization
- Machine Learning
- Computer Vision

USC_OXFORD

In order to improve the lists, you can help us by suggesting top experts in different fields. Please send us an email and let us know who you think is a top expert in a field that you are familiar with. We will do some investigation to see if that person is really a top expert and then try to contact her/him to ask for a book list.


June 27, 2017

Today we added some lists to the UCLA and USC links. This is not complete by any means, but as we go we will add more and more to the lists, so check back often and see what we have added. If you want to be notified automatically, send us a message with "notify" in the subject line.

Under UCLA these are some fields that we added today:
- Digital Logic
- Machine Learning
- Artificial Intelligence
- Monte Carlo Methods
- Computer Vision

Under USC these are some fields that we added today:
- Database
- Computer Graphics
- Computer Networks
- Information Retrieval

We hope you find the site useful and we love to hear your comments and feedback.


June 26, 2017

Today we have added a list of some of the books that are being used at Harvard. The lists are not complete, but we are gathering data on a daily basis and will update the site very often. So check back soon to see what has been added.

Our other announcement is that we have received a lot of messages, comments, suggestions and encouragement from our readers. First of all, we would like to say a big thanks to you all. Your messages make us very happy and proud, and your suggestions will help us improve our website. There are a few points that we would like to make:

The layout and the user interface of the website
We know that the website layout and UI are not perfect, but we will certainly look at all the comments and feedback from our lovely readers and improve the layout as soon as we can.

Not many experts on the website yet
As we have just started the website, we recognize that there are not many experts and their lists on our website yet, but we will certainly increase the number of lists suggested by top experts on a regular basis. We have contacted a number of experts and are waiting for their responses. We will contact more experts soon, and this will be an ongoing effort, so check back often. If you know a top expert personally and can refer us to her/him, we would really appreciate it. Please send us a message telling us the name of the expert. We will do some research to see if s/he is a top expert, and if so we will contact that expert for a list of books.

No lists for many fields
That is the case; even in electrical engineering, we don't have lists for all sub-fields. But we are working hard to gather more lists to address this issue. We also have plans to extend the lists to categories beyond electrical engineering and computer science. We will add lists from other engineering disciplines, such as mechanical engineering, and our longer-term plan includes arts, sciences, the humanities, and many more areas.

Please Spread the Word About Dorado List
The more users and readers we have, the better we get. So please help us by spreading the word about us; it will help us make this website a better place :)

We finish this post with this picture, as Dorado List is filled with love towards our readers.


June 25, 2017

Today we are very happy to announce that we have received a list of suggested books from one of the most famous top experts in the field of error control coding, Professor Shu Lin.

Lin
Shu Lin
Adjunct Professor, Department of Electrical and Computer Engineering, University of California, Davis. Ph.D. - Rice University

Professor Lin is currently an Adjunct Professor at the University of California, Davis. He has published at least 800 technical papers in prestigious refereed technical journals and international conference proceedings. He is the author of the book An Introduction to Error-Correcting Codes (Englewood Cliffs, NJ: Prentice-Hall, 1970). He also co-authored, with D. J. Costello, the book Error Control Coding: Fundamentals and Applications (Upper Saddle River, NJ: Prentice-Hall, 1st edition, 1982; 2nd edition, 2004); the book, with T. Kasami, T. Fujiwara, and M. Fossorier, Trellises and Trellis-Based Decoding Algorithms (Boston, MA: Kluwer Academic, 1998); and the book Channel Codes: Classical and Modern (Cambridge University Press, 2009).

Dr. Lin was elected an IEEE (Institute of Electrical and Electronics Engineers) Fellow in 1980 and a Life Fellow in 2000. In 1996, he was a recipient of the Alexander von Humboldt Research Prize for U.S. Senior Scientists, and in 2000 he received the IEEE Third Millennium Medal. In 2007, he was a recipient of the Communications Society's Stephen O. Rice Prize in the Field of Communications Theory.


June 22, 2017

Today we have added the list of books suggested by a top expert in the area of Electronics. Dr. Hooman Darabi of Broadcom has kindly sent us his list of suggested books.

Darabi
Hooman Darabi
Senior Technical Director and Fellow at Broadcom. Ph.D. - UCLA

Dr. Darabi is a well-known expert in the field of analog and RF IC design for wireless communications. He is an IEEE Fellow and a Broadcom Fellow and has authored/co-authored a number of books. Currently he is a Senior Technical Director at Broadcom Limited and an Assistant Adjunct Professor in the Electrical Engineering and Computer Science department of the University of California, Los Angeles.


June 18, 2017

Today we added a list of many books used by professors at the University of Toronto (UofT). The list includes books in categories such as Analog Electronics, DSP, Control Theory and many more. You can click on each book image to go to Amazon and see full details of that book. We are working to add more books to the UofT list, so please come back often and check it, or send us an email asking to be added to the list of people who get notified when new books are added.


June 8, 2017

Today we added lists of suggested books from two world-renowned experts, one in communications and one in electronics. Professor Behzad Razavi of UCLA and Professor Hamid Jafarkhani of UC Irvine kindly accepted our request and sent us their lists. Here is a short biography of each of these experts:

Razavi
Behzad Razavi
Professor, Electrical Engineering, University of California, Los Angeles. Ph.D. - Stanford

Behzad Razavi is a professor and researcher of electrical and electronic engineering. Noted for his research in communications circuitry, Razavi is the director of the Communication Circuits Laboratory at the University of California Los Angeles. He is a Fellow and a distinguished lecturer for the Institute of Electrical and Electronics Engineers (IEEE). Among his awards, Razavi is a two-time recipient of the Beatrice Winner Award for Editorial Excellence at the 1994 and 2001 International Solid-State Circuits Conferences. He is the author/editor of seven books and is recognized as one of the top 10 authors in the 50-year history of ISSCC.

Jafarkhani
Hamid Jafarkhani
Chancellor’s Professor, Electrical Engineering and Computer Science, University of California, Irvine. Ph.D. - University of Maryland at College Park

Hamid Jafarkhani is a Chancellor’s Professor in electrical engineering and computer science at the University of California, Irvine’s Henry Samueli School of Engineering. His research focuses on communications theory, particularly coding and wireless communications and networks. Within the wireless communications field, Jafarkhani is best known for his contributions to two seminal papers which established the field of space–time block coding, published whilst working for AT&T. The first of these, “Space–time block codes from orthogonal designs”, established the theoretical basis for space–time block codes and the second, “Space–time block coding for wireless communications: performance results”, provided numerical analysis of the performance of the first such codes. Jafarkhani received a National Science Foundation CAREER award in January 2003. He is also a Fellow of the Institute of Electrical and Electronics Engineers (IEEE), an editor of IEEE Transactions on Wireless Communications and an associate editor of IEEE Communications Letters. Jafarkhani is the author of “Space-Time Coding: Theory and Practice”. He is one of the Top 10 Most Cited Researchers in Computer Science according to the ISI web of science.


June 4, 2017

Dorado List is live. We have just released our website, and we hope you enjoy it and find it useful. If you have any questions or comments, please don’t hesitate to send us an email. Over time we will contact more top experts and ask for the lists of books they consider great references. We will also keep looking for books that are taught at top universities and add them to our website.




from Hacker News http://ift.tt/YV9WJO
via IFTTT

What to Do If You Were Affected by the Equifax Hack

http://ift.tt/2gV5IBX

Image credit: Pexels

Equifax’s “security incident” earlier this week affected 143 million Americans. That’s a huge number of people, which means the chances that you or someone you know was affected are pretty high. Equifax’s site was even returning positive results for fake Social Security numbers at one point.

If you were one of the millions impacted by the attack, then you have to figure out what to do next. CNET put together a pretty good step-by-step guide. Here are a few of its suggestions:

Enroll in TrustedID

Equifax is offering a free year of TrustedID to everyone. The credit monitoring service “includes 3-Bureau credit monitoring of Equifax, Experian and TransUnion credit reports; copies of Equifax credit reports; the ability to lock and unlock Equifax credit reports; identity theft insurance; and Internet scanning for Social Security numbers.”


Equifax faced a bit of backlash on social media when it made the offer: one, because you have to wait to sign up on a specific date that the company doesn’t plan on reminding you of; and two, because the terms of service on the company’s site dedicated to the hack included an arbitration clause that seemed to imply you were waiving your right to sue the company over the hack if you took advantage of the offer.

The clause really only applied to suing over the site itself, not the hack, but on Friday the company added an opt-out: you can preserve your right to sue by sending a letter.

The tool on the site that tells you if you were hacked might also be broken right now, so there’s that.

Check Your Credit

This breach actually happened three months ago, so there’s a chance that your information is already being used. Check your credit report and make sure there’s nothing out of the ordinary happening.

Freeze Your Credit

CNET suggests freezing your credit, which is a suggestion we made last week as well. If you freeze your credit, then anyone who wants to use it to open an account will need a special PIN.

If you’re not planning on making any big purchases soon or opening any new credit cards, then it can be a good preventative move in keeping your credit safe.

Set a Fraud Alert

Setting up a fraud alert is another one of those things that will make using your credit a bit of a hassle, but can keep you protected. If you set up a fraud alert, then a company will have to verify your identity before they can open an account in your name.


You set one up by contacting one of the credit bureaus (Equifax, Experian, or TransUnion), and alerts last 90 days.

Keep an Eye on Your Taxes

CNET brings up a good point about watching out when you file your taxes this year. Sometimes people will use stolen personal info to file false tax returns and collect the refunds. That means if you file your taxes after they do, you might get a message from the IRS saying your taxes have already been filed.

If you can, make sure to file your taxes on the early side this year.



from Lifehacker http://lifehacker.com
via IFTTT

MIT to offer Degree in Computer Science + Economics

http://ift.tt/2vNWFJ8

Designing electronic marketplaces will be the focus of a new degree to be offered jointly by MIT’s economics and computer science departments.

“This area is super-hot commercially,” says David Autor, the Ford Professor of Economics and associate head of the Department of Economics. “Hiring economists has become really prominent at tech companies because they’re filling market-design positions.”

Because these companies need analysts who can decide which objectives to maximize, what information and choices to offer, what rules to set, and so on, “companies are really looking for this skill set,” he says.

Uber, Airbnb, and Amazon are familiar marketplaces that need computer-scientist-cum-economist designers, and online worlds like EVE Online join multiple marketplaces into entire economies.

I’d also note that an increasing number of marketplaces will need to be designed not for people but for non-human traders–this will create entirely new challenges.

The post MIT to offer Degree in Computer Science + Economics appeared first on Marginal REVOLUTION.



from Marginal Revolution http://ift.tt/oJndhY
via IFTTT

How I Used Professional Poker to Become a Data Scientist

http://ift.tt/2f1o4B3



Poker is a microcosm of both life and business

April 15th, 2011, is referred to as Black Friday in the poker community. It’s the day that the United States Government shut down the top three online poker sites. About 4,000 US citizens played online poker professionally back then, and thus the exodus began. Canada and Costa Rica were popular destinations. I’m from Southern California, so I’m no stranger to Baja California. I decided to set up shop south of the border in a town called Rosarito, Mexico.

As I prepared to move down to Baja, I was often asked, “What happens if this doesn’t work out?” Playing online poker requires a solid understanding of data, probability, and statistics. Back then I knew of only one other profession that utilized a similar skill set. My response was, “I’ll probably end up working as an analyst on Wall Street.”

Later that year, the movie Moneyball was released. Based on Michael Lewis’s nonfiction book of the same name, the movie takes place during the 2002 season of the Oakland A’s. Using data analysis strategies similar to those of Wall Street analysts, the A’s front office revolutionized baseball, winning a record 20 games in a row on a shoestring budget. This was the moment that data analytics went mainstream. One year later, Thomas H. Davenport and D.J. Patil published “Data Scientist: The Sexiest Job of the 21st Century” in the Harvard Business Review. Glassdoor.com has ranked data scientist as the top job in the US for both 2016 and 2017.

What data analysis has in common with poker

I began transitioning to a career in data science in 2016. I’ve noticed that much of what I learned during my poker career is relevant to customer segmentation. Where a poker player is from (geographic segmentation), how the player thinks (psychographic segmentation), and how the player plays (behavioral segmentation) are all very important factors when determining a strategy against that player. I learned during my poker career that these factors could be boiled down to a couple of simple statistics. I could tell how good a player was based on just two numbers. To test this theory, I built a K-Means model to segment my poker opponents, much like a company would segment their customers.

The data for this project was generated during my playing career. I played No-Limit Texas Hold’em cash games, and the stakes ranged from $25 buy-in ($0.25 big blind) to $200 buy-in ($2 big blind). I usually played 15–20 tables at a time, each table having eight or nine players, which resulted in about 600 hands per hour. I have the most data at the $25 buy-in games because it’s the most popular level. I used the data from this level in 2013, when I won $1,913.13 over 387,373 hands, a small fraction of the hands I played that year.
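
For context, poker win rates are usually quoted in big blinds per 100 hands; here is a quick sketch of the conversion using the figures above (my own illustration, not the author's code):

winnings = 1913.13       # dollars won at the $25 buy-in level in 2013
big_blind = 0.25         # dollar value of the big blind at that level
hands = 387373

bb_per_100 = winnings / big_blind / (hands / 100)
print(round(bb_per_100, 2))  # -> 1.98 big blinds per 100 hands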

Each time a poker hand is played at an online poker site, a hand history is generated that records everything each player did during the hand. I used software called Hold’em Manager (think Tableau for poker), which downloads each of these hand histories in real time to a PostgreSQL database so you can keep track of your opponents’ tendencies. These tendencies are visualized as a heads-up display (HUD) at the poker table.

How I used data analytics to outmaneuver my opponents

In Texas Hold’em, each player is dealt two cards at the beginning of the hand which means there are 1326 starting hand combinations you can be dealt. For those who aren’t familiar with how Texas Hold’em is played, click here for a full explanation. As a hand progresses, it’s necessary to make assumptions about the range of hands your opponent may be holding. Having statistics on an opponent’s tendencies is powerful because it makes it very easy to accurately assume your opponent’s range. For example, some players rarely raise Pre-Flop so their Pre-Flop Raise (PFR) percent is low. If an opponent has a 2% PFR, I know they only have about 26 of the 1326 starting hand combinations in their range. Since they are likely to raise with the best hands, and AA, KK, and AK have 28 combinations, I have a solid idea of what they have.
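
Those combination counts are easy to verify with a few lines of Python (a sketch of my own, not from the article):

from math import comb

total = comb(52, 2)          # 1326 possible two-card starting hands
pair = comb(4, 2)            # 6 combos of any specific pair (AA, KK, ...)
unpaired = 4 * 4             # 16 combos of two specific ranks (e.g. AK)

print(total)                 # 1326
print(2 * pair + unpaired)   # 28: AA + KK + AK combined
print(int(0.02 * total))     # 26: roughly the size of a 2% PFR range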

[During each poker session, I would mark any hand that confused me and go back and review it at the end of the day. For an in-depth look at how to use probability and statistics to maximize expected value using actual hands, and actual opponent statistics, click here.]

The two statistics that I focused on to determine if an opponent was a good player or not were PFR percent, mentioned above, and ‘Voluntarily Put Money in Pot’ (VP$IP) percentage. VP$IP percent is the frequency with which a player plays a hand when first given an opportunity to bet or fold. Those two stats, and the ratio between the two, gave me most of the information I needed to determine if a player was a winner (a Shark) or a loser (a Fish).

The Pareto Principle, named after economist Vilfredo Pareto, states that for many events, roughly 80% of the effects come from 20% of the causes. This suggests that 80% of a company’s profits are likely generated from about 20% of their customers, and 80% of my profits were likely generated from about 20% of my opponents.

I identified the 20% of my opponents who I had the highest win rate against (Fish), and the 20% who I had the highest loss rate against (Sharks). I built a K-means model with five clusters to segment my opponents, using eight statistics that measure important playing tendencies as variables. Once segmented, I identified the segment with the highest concentration of Fish, and the one with the highest concentration of Sharks. For each segment, I averaged the opponent’s VP$IP percent and PFR percent. My hypothesis was that the Sharks would have a VP$IP and PFR most similar to my VP$IP and PFR, and the Fish would have the highest VP$IP and biggest difference between the two stats.
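
As a rough sketch of what this kind of segmentation can look like in scikit-learn (the file and column names here are hypothetical; the author's actual code is linked at the end of the post):

import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical input: one row per opponent, eight numeric
# playing-tendency statistics (e.g. vpip, pfr, three_bet, ...).
stats = pd.read_csv('opponent_stats.csv')
scaled = StandardScaler().fit_transform(stats)  # keep one stat from dominating

model = KMeans(n_clusters=5, n_init=10, random_state=0)
stats['segment'] = model.fit_predict(scaled)

# Average VP$IP and PFR per segment, as in the analysis above.
print(stats.groupby('segment')[['vpip', 'pfr']].mean())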

The Shark

VP$IP = 15.1%
PFR = 11.7%

In the Shark segment, opponents on average have a VP$IP of 15.1% and a PFR of 11.7%. The image on the top approximates what a 15.1% VP$IP range looks like, and the image on the bottom approximates an 11.7% PFR range. The hands highlighted in yellow are the hands these players typically play. As you can see, these images are similar and consist mainly of good starting hands. These players fundamentally understand two things.

  1. There is no reason to put money in the pot if you don’t have a good starting hand so it’s better to fold.
  2. When you do have a good starting hand, it is better to play aggressively and raise. The fundamental reason aggressive poker is more profitable than passive poker is that betting and raising give you two ways to win: having the best hand or getting your opponents to fold. Your opponents can’t fold if you don’t bet.

These opponents cost me money at the poker table, but how might this look for a company? Let’s say we’re an online retailer selling widgets. We can probably learn a lot about our potential customers from how many pages of our website they’ve viewed, along with which specific pages. How each person interacts with the website will show a pattern of behavior. A segment that views a limited number of pages, mostly pages that sell low-margin widgets, may indicate a pattern of behavior that consistently results in low- or no-profit customers. Once identified, we can avoid allocating resources to these potential customers.

The Fish

VP$IP = 43.8%
PFR = 14.0%

In the Fish segment, opponents on average have a VP$IP of 43.8% which is approximated by the image on the top and a PFR of 14%, approximated by the image on the bottom. These images are not similar. These players are voluntarily putting money in the pot almost three times as often as Sharks. This indicates they are frequently playing with mediocre or even bad starting hands, and what’s worse is they’re playing them passively. Playing bad hands passively costs money at the poker table, and that money goes into my pocket. I never sat at a poker table that didn’t have at least two Fish playing.

Let’s go back to our online widget retailer analogy. What might their highest value segment look like? This segment probably views a high number of web pages, and spends time on pages that sell the widgets with the highest profit margins. High value customers might be arriving through certain landing pages, or might gravitate to certain blog posts. It could even be as simple as the time spent on the website. Once a potential customer is identified as being part of this high value segment, we’d want to allocate resources to convert them into customers, such as adding them to a targeted marketing campaign or having a salesperson reach out.


I’ve found that the game of poker is a microcosm of life. What holds true at the poker table will often have a corollary in many other aspects of life. There are a few key principles I learned during my poker career that I’ll share with you.

  • Everybody thinks they’re great at poker, but most people are terrible at it. When you make a living off people lying to themselves, you realize that everyone lies to themselves. It takes a lot of effort to be honest with yourself and others, but it’s worth it.
  • It’s always your fault. This is your first line of defense against lying to yourself. To improve at poker, you must be self-critical. You can’t learn from your mistakes if you blame them on someone or something else. If you don’t make a sale during your presentation, or don’t land that data science job, take the time to figure out what you did wrong, and don’t make that mistake again.
  • Make decisions based on logic and reason, not ego and emotion. Ego and emotion cost a lot of money at the poker table.
  • In games of luck and skill, those who gather information most efficiently and utilize it most effectively, usually win in the long run. So do your homework and take the time to research the company you’re about to interview with, and the hiring manager who’s conducting the interview.
  • Don’t be passive. The best strategy is usually selective aggression. Selective aggression at the poker table means betting and raising with strong hands and occasional bluffs, rather than passively calling or checking. In business it means asking for the sale when you send your proposal, rather than just sending the proposal and hoping the deal closes itself. Fortune favors the bold.

P.S. In case you’re curious, over the 387,373 hands played, my VP$IP was 15.6% and my PFR was 12.2%, and the Fish segment had the highest average VP$IP. You can review the code that I wrote to segment my poker opponents here.

Daniel Poston is a Springboard graduate and a data scientist based out of San Diego, CA.


If you enjoyed this post, we’re pretty sure you’ll love Springboard’s Data Science Career Track course. It’s the only online bootcamp to guarantee you a job or your money back. Enroll now.



from Hacker News http://ift.tt/YV9WJO
via IFTTT

Consistent Selenium Testing in Python

http://ift.tt/2eLl5MO


Back in April, I learned about Timestrap, a self-hostable, Django-based time-tracking project from a post on HackerNews by Isaac Bythewood. As I have been learning Python in the past year or so, I reached out to Isaac and started contributing to the project. After getting familiar with the core application, I turned my attention to testing and eventually found my way to Selenium, a collection of browser automation tools used for frontend testing.

I had never worked with Selenium or other automated testing products, so it struck me as a great opportunity to get my feet wet in something new. After getting things up and running, we quickly learned that the test results were quite inconsistent across development environments - even to the point that tests would occasionally succeed when run individually but fail as part of the full test case.

After much trial and error, we have settled on a (mostly) consistent setup for testing with Selenium, Python and SauceLabs. This produces much better results than testing in development environments and crossing fingers during CI. Hopefully this primer will help others facing similar challenges (as we had a lot of trouble finding good material on the subject).

Getting Started

Use pip to install the selenium package (perhaps in a virtual environment):
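
pip install selenium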

Selenium needs a WebDriver before it can do anything useful. There are currently drivers for Firefox, Chrome, Edge and Safari. We originally started out with Firefox's geckodriver, but in initial attempts to fight inconsistency moved to chromedriver hoping for better results. Ultimately, both seem to have their shortfalls but we have stuck with chromedriver since the original change so that is what I will use in examples here.

Installing Chromedriver

The installation process is pretty simple: the chromedriver executable just needs to be in your path so Selenium can interact with it during testing (an installation of Chrome or Chromium is assumed here). Basically, the chromedriver command just needs to be executable on the system being used for testing. On Linux, that might look like this:

curl -L http://ift.tt/2gseS8K -o chromedriver.zip
sudo mkdir -p /usr/local/bin/
sudo unzip chromedriver.zip -d /usr/local/bin/
sudo chmod +x /usr/local/bin/chromedriver

The above set of commands

  1. downloads chromedriver,
  2. places it in a common $PATH location, and
  3. sets it to be executable.

Driving Chrome with Python

With chromedriver ready to go, all that is left is to import the WebDriver package from Selenium and tell it to use chromedriver. E.g.

from selenium import webdriver

driver = webdriver.Chrome()

Running this code should open a Chrome window, but nothing more will happen because this example does not give the WebDriver any instructions.

Let's try a simple task: getting the "word of the day" from Merriam-Webster's website. A quick look at the source of M-W's word of the day page reveals where the word can be found in markup:

<article>
...
    <div class="quick-def-box">
      <div class="word-header">
        <div class="word-and-pronunciation">
          <h1>confrere</h1>
          ...
        </div>
      </div>
...
</article>

So the actual word of the day, "confrere" today, is found in an h1 child of a div element with the class word-and-pronunciation. Searching the page reveals that this class is unique, so it can be used by Selenium to identify the element and get its content like so:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('http://ift.tt/2gNhvl1')
element = driver.find_element_by_css_selector('.word-and-pronunciation h1')
print(element.text)
driver.close()

Running the above should open a Chrome window that loads the word of the day page and then closes. The Python script should output the word before exiting. And there you have it! This example uses a CSS selector with Selenium's find_element_by_css_selector method, but there are many other find_element_by_* methods available for page "navigation".

Key Selenium Functionality

There are lots of important Selenium classes and methods that will be used extensively for testing web pages. Here is a short list of some key functionality to know about -

Finding Elements

The example above uses WebDriver.find_element_by_css_selector and there are eight of these methods in total (plus eight more in plural form):

  1. find_element_by_class_name
  2. find_element_by_css_selector
  3. find_element_by_id
  4. find_element_by_link_text
  5. find_element_by_name
  6. find_element_by_partial_link_text
  7. find_element_by_tag_name
  8. find_element_by_xpath

All of these methods are pretty descriptive (and long), so a nice helper is the By class from selenium.webdriver.common.by. By can replace the longer-form methods with a simpler shorthand. The previous code example could be rewritten as:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://ift.tt/2gNhvl1')
element = driver.find_element(By.CSS_SELECTOR, '.word-and-pronunciation h1')
print(element.text)
driver.close()

While this code is not necessarily shorter, I suggest taking it a bit further and creating a wrapper method for finding elements. This should significantly reduce the effort of typing these methods out as test size and complexity increases. Here is an example wrapper I have used in test cases:

def find(self, by, value):
    # Return a single element when exactly one matches, otherwise the list.
    elements = self.driver.find_elements(by, value)
    if len(elements) == 1:
        return elements[0]
    else:
        return elements

This uses the plural find_elements method and returns either a list or a single item depending on what is found. With this, I can use find(By.ID, 'my-id') instead of driver.find_element_by_id('my-id'). This form should produce much cleaner code, particularly when jumping between the various available find methods.

Sending Input

Most web app projects will deal with some degree of input and Selenium can support that fairly well. Every WebElement class (the result of the various find_element methods) has a send_keys method that can be used to simulate typing in an element. Let's try to use this functionality to search "Python" on Wikipedia -

A quick look at Wikipedia's page source reveals that the search input element uses the id searchInput. With this, Selenium can find the element and send some keys to it:

from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('http://ift.tt/1jXKT14')
el = driver.find_element(By.ID, 'searchInput')
el.send_keys('Python')

The above code should result in an open Chrome window with the Wikipedia page loaded and "Python" in the search input field. This window stays open because the code does not include the driver.close() command used in the previous examples.

There are a couple of different ways to actually submit a form. In general I have found no real difference between any of the options, but I tend to fall back on locating and "clicking" the form's submit button when possible. Here are some of the ways submission can be accomplished:

Submitting the form element

Taking another look at the Wikipedia source, the search form has a simple ID: search-form. This ID can be used with the WebElement.submit() method to submit the form.

Add the following to the previous example from Sending Input:

form = driver.find_element(By.ID, 'search-form')
form.submit()

Running the code should leave you with a Chrome window open to Wikipedia's results page for Python.

Clicking a form submit button

The Wikipedia search page includes a fancy, styled submit button for searching. It does not have a unique ID, so the code will need some other way to identify and "click" the button. It is the only button element inside search-form, so it can be easily targeted with a CSS selector.

Add the following to the previous example from Sending Input:

button = driver.find_element(By.CSS_SELECTOR, '#search-form button')
button.click()

As above, this code should produce Wikipedia's results page for Python.

Pressing the enter key

Lastly, Selenium has a set of key codes that can be used to simulate "special" (non-alphanumeric) keys. These codes are found in selenium.webdriver.common.keys. In order to submit the form, the code will need to use the return (or enter) key, so a revised version of the Wikipedia search code looks like this:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('http://ift.tt/1jXKT14')
el = driver.find_element(By.ID, 'searchInput')
el.send_keys('Python')
el.send_keys(Keys.RETURN)

Just like the two previous examples, this script should exit leaving a Chrome page open to the Wikipedia search results for "Python".

This is perhaps the cleanest way to get a form submitted because it doesn't require finding other elements, but a thorough tester may want to consider testing multiple submission methods to ensure functionality.

Clearing Input

While Selenium does offer a WebElement.clear() method, I have found it to be somewhat unreliable depending on the driver, development environment, and application being tested. In theory, it should clear input that has been sent to an element.

Let's use Selenium to load Google and search for "selenium". Google's search input element does not have a unique ID or class, but it does use a name attribute with the value "q". This can be used to find the element and send the keys:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
el = driver.find_element(By.NAME, 'q')
el.send_keys('selenium')
el.send_keys(Keys.RETURN)

This should produce the Google search results page for "selenium".

On the results page, the search field still has a name value of "q" and now is pre-filled with "selenium" for a value. Although the name has not changed, Selenium will need to find the element again because the page has changed. Add the following to the code -

el = driver.find_element(By.NAME, 'q')
el.clear()

Running the full code block should now result in the Google search results page with an empty search field.

Again, this will probably work most of the time, but another approach is to use the Keys class to simulate tapping the backspace key in the field. Let's create a utility function to handle this and modify the code slightly -

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys


def clear(element):
    # Simulate pressing backspace once for each character in the field.
    value = element.get_attribute('value')
    if value:
        for _ in value:
            element.send_keys(Keys.BACK_SPACE)

driver = webdriver.Chrome()
driver.get('https://www.google.com/')
el = driver.find_element(By.NAME, 'q')
el.send_keys('selenium')
el.send_keys(Keys.RETURN)
el = driver.find_element(By.NAME, 'q')
clear(el)

The clear method here takes a WebElement, reads its value attribute, and simulates hitting backspace until all text is removed from the field.

Overall, I have found the BACK_SPACE approach to be more reliable than WebElement.clear(), which sometimes seems to simply do nothing.

Perhaps I never looked deeply enough at why clear() has been inconsistent, but I would at least advise always creating a wrapper for this sort of functionality so it can be easily modified if troubles arise in the future.

Waiting

"Waiting" in Selenium can be a deceptively complex problem. Up to this point, all examples have relied on Selenium's own ability to wait for a page to finish loading before taking any particular action. For simple tests, this may be a perfectly sufficient course. But as tests and applications become more complex this method may not always do the job.

Selenium provides some useful tools for addressing this issue -

Implicit waits

The easiest way to add some wiggle room is the WebDriver.implicitly_wait() method. This method accepts a number of seconds to wait before giving up when executing any of the find_element methods.

The default implicit wait is zero (or no wait), so if a particular element is not found immediately Selenium will raise a NoSuchElementException. Let's try to find an element with a name attribute "query" on GitHub (there isn't one):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.get('https://www.github.com/')
el = driver.find_element(By.NAME, 'query')

This code should result in a NoSuchElementException pretty quickly after Chrome loads GitHub's homepage.

Now, let's try the code below, which sets an implicit wait time of five seconds for the same impossible task:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()
driver.implicitly_wait(5)
driver.get('https://www.github.com/')
el = driver.find_element(By.NAME, 'query')

This code will produce the exact same exception, but it will wait five seconds before doing so.

While these examples paint a very simple picture, the reality is that various conditions of any test environment or application will impact Selenium's ability to determine when a page is loaded or whether or not an element exists.

I recommend setting a 10-second implicit wait time in all tests. This should help prevent intermittent exceptions caused by underlying issues like a slow network connection or a sluggish web server.
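
For example (a minimal sketch), setting the wait once when the driver is created covers every subsequent lookup:

from selenium import webdriver

driver = webdriver.Chrome()
driver.implicitly_wait(10)  # applies to all find_element calls on this driver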

Expected conditions (explicit waits)

When implicit waits are not enough, expected conditions are extremely valuable. The WebDriverWait class provides the until() and until_not() methods that can be used with expected_conditions to create more complex and nuanced wait conditions.

There are many expected conditions available, but the one that I have frequently come back to in my testing is presence_of_element_located().

presence_of_element_located() takes a locator (a By strategy paired with a value) and succeeds once a matching element is present in the DOM. It can be used with WebDriverWait.until() and a wait time (in seconds) like so:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec

driver = webdriver.Chrome()
WebDriverWait(driver, 5).until(ec.presence_of_element_located((By.ID, 'html-id')))

For a real example, the website webcountdown.net lets you create a countdown timer that adds a pop-up to the DOM when the timer finishes. Selenium can handle this using the above template:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec

driver = webdriver.Chrome()
driver.get('http://ift.tt/2gsJWoW')  # Starts a 3 second timer.
if WebDriverWait(driver, 5).until(ec.presence_of_element_located((By.ID, 'popupiframe'))):
    print('Popup located!')

The above code should open the countdown page, run a three second countdown and print "Popup located!" after the three second countdown completes. This works because WebDriver is told to wait for up to five seconds for the popup to appear.

If, for example, this were modified with a two second timeout for the WebDriverWait class, Selenium would raise a selenium.common.exceptions.TimeoutException because the timer does not finish (and therefore does not create the element with ID "popupiframe") before the two seconds are up.

What is WebDriverWait good for? Briefly: single-page apps (SPAs).

Testing may require traversing an app's navigational elements and if the page is not fully reloading, Selenium will need to use WebDriverWait to do things like wait for a new section or table of data to load after an AJAX-style API call.

Other expected conditions will follow pretty much the same syntax and mostly have (very) verbose names. Two of the others that I have found useful in practice are text_to_be_present_in_element() and element_to_be_clickable().
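
Their usage follows the same pattern as presence_of_element_located(); here is a quick sketch with hypothetical selectors and URL:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec

driver = webdriver.Chrome()
driver.get('https://example.com/')  # placeholder page

# Wait until a status element shows the expected text...
WebDriverWait(driver, 10).until(
    ec.text_to_be_present_in_element((By.ID, 'status'), 'Ready'))

# ...then wait until a button is actually clickable before clicking it.
button = WebDriverWait(driver, 10).until(
    ec.element_to_be_clickable((By.CSS_SELECTOR, '#toolbar button')))
button.click()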

Time waits

Lastly, I have also used a workaround method to do simple, explicit time-based waits without any expected conditions. One area where this happened to come in handy for me is testing the result of a Javascript-based "stop watch" that updates in real time. As part of a test, I initiate the stop watch, wait for two seconds and then verify the displayed time to be correct. To achieve this, I created a method that essentially does an expected conditional wait that times out intentionally:

from selenium.common.exceptions import TimeoutException
from selenium.webdriver.support.ui import WebDriverWait

def wait(self, seconds):
    try:
        # The lambda never returns True, so the wait always times out.
        WebDriverWait(self.driver, seconds).until(lambda driver: False)
    except TimeoutException:
        pass

This method can be used, for example, to wait five seconds by calling wait(5). WebDriverWait will raise an exception after five seconds because the until() argument is a simple lambda that always returns False. By catching and ignoring the exception, this method just waits for the specified number of seconds and nothing else. Handy!

Improving Consistency with SauceLabs

These basics are enough to get things going in Selenium, but over time as test complexities increase and multiple developer environments evolve, consistency will become a considerable pain. In our experience developing Timestrap, there were inconsistencies causing test failures based on development OS (Windows, OS X, Linux flavors, etc.), web drivers (Firefox, Chrome, gecko, etc.), and seemingly the phases of the moon.

After trying many different things to stabilize environments, we eventually found and started using SauceLabs. SauceLabs provides a number of services related to testing and a few free tiers for open source projects, including Cross Browser Testing. Using this service can help bring stability and consistency to Selenium tests regardless of the local development environment.

To get started, SauceLabs requires an existing, publicly accessible open source repository (e.g. on GitHub, GitLab, etc.). Use the OSS Sign Up page with the "Open Sauce" plan to get started. Once signed up and logged in, there are a couple of different ways to take advantage of SauceLabs testing:

Manual Tests

If you have an Internet-accessible project available, Manual Tests can be used to poke around and get a feel for the various environments supported. This can serve as a wonderfully quick and easy way to do some prodding from a virtual browser on iOS, Android, OS X, Windows, or Linux, using various versions of Safari, Chrome, Firefox, Internet Explorer, and Opera. Once a session is complete, the dashboard will have a log with screenshots and videos available to view or download.

Automated Tests

While manual testing is quick and convenient, automated testing is what actually improves the consistency of Selenium tests in Python. Running Python Selenium tests through SauceLabs requires three key things:

Username and Access Key

From a logged in SauceLabs account, the access key can be found on the User Settings page. This key and the associated username will need to be available in the local test environment in order to execute the Selenium-driven tests on SauceLabs.

I recommend getting used to using the environment variables SAUCE_USERNAME and SAUCE_ACCESS_KEY as these will be used by the Sauce Connect Proxy Client for local development testing.

On Linux this can be achieved with:

export SAUCE_USERNAME={sauce-username}
export SAUCE_ACCESS_KEY={sauce-access-key}

WebDriver.Remote

Selenium provides a WebDriver.Remote class for interacting with a command-based remote server running the WebDriver protocol. The class must be initialized with two arguments, command_executor, a URL pointing to the remote command point, and desired_capabilities, a dictionary of settings for the executor.

For SauceLabs, the command_executor should be set to http://ift.tt/2ey4qiV where SAUCE_USERNAME and SAUCE_ACCESS_KEY represent the properties outlined in the previous section of this post.

The desired_capabilities dictionary is used to provide the environment settings to SauceLabs. SauceLabs has a wonderful Platform Configurator tool for easily selecting from the available options.

To use the example below, the local environment must provide two variables: SAUCE_USERNAME and SAUCE_ACCESS_KEY. With these variables set, the following code will create a remote WebDriver set up to access SauceLabs using Chrome 48 on a PC running Linux:

import os
from selenium import webdriver

# Get the user name and access key from the environment.
sauce_username = os.environ['SAUCE_USERNAME']
sauce_access_key = os.environ['SAUCE_ACCESS_KEY']

# Build the command executor URL for the SauceLabs endpoint.
url = 'http://{}:{}@ondemand.saucelabs.com:80/wd/hub'.format(
    sauce_username, sauce_access_key)

# Build the capabilities dictionary (from Platform Configurator).
caps = {'browserName': "chrome"}
caps['platform'] = "Linux"
caps['version'] = "48.0"

driver = webdriver.Remote(command_executor=url, desired_capabilities=caps)
driver.get('https://www.google.com')
driver.quit()

After executing the above sequence, the SauceLabs Dashboard should show a new job with video and screenshots of Chrome on Linux loading Google. Neat!

When using Chrome, a chromeOptions dictionary can also be provided in the desired_capabilities dictionary with some more specific settings. Within that dictionary, a prefs dictionary can also be used to set further preferences. For instance, if testing needs to be done on an app that requires login, it may be helpful to use this chromeOptions dictionary:

caps['chromeOptions'] = {
    'prefs': {
        'credentials_enable_service': False,
        'profile': {
            'password_manager_enabled': False
        }
    }
}

Very simply, this prevents the "Do you want to save your password?" sort of dialog box from appearing in all screenshots of a test session after login.

Sauce Connect Proxy Client

All of this works great just as described... if the app being tested happens to be available on the public Internet. If that is not the case (and it probably isn't), SauceLabs provides the Sauce Connect Proxy to connect to your local app.

On Linux, for example, the proxy client can be installed like so:

wget http://ift.tt/2vPySJI
sudo mkdir -p /usr/local/bin/
tar xzf sc-4.4.9-linux.tar.gz
sudo mv sc-4.4.9-linux/bin/sc /usr/local/bin/
sudo chmod +x /usr/local/bin/sc
sc --version
#Sauce Connect 4.4.9, build 3688 098cbcf -dirty

The sc command will make use of the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables. When executed with no parameters, the proxy client will run through some initialization leading to the message, Sauce Connect is up, you may start your tests. From here the client will simply sit and listen for commands and the SauceLabs Tunnels page should show the client as active.

With all of this in place, tests against a local development server can now be proxied up to SauceLabs and run in a considerably more consistent environment!

This method significantly improved the test infrastructure for Timestrap and allowed us to refocus on development instead of testing.

Putting It All Together

We can bring all this together in one (admittedly somewhat complex) Python test file:

import os
import threading
import time
import unittest
from http.server import BaseHTTPRequestHandler, HTTPServer
from selenium import webdriver
from selenium.webdriver.common.by import By


class TestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.end_headers()
        self.wfile.write(b'<html><head><title>Python Selenium!</title></head>')
        self.wfile.write(b'<body><div id="main">Hello!</div>')
        self.wfile.write(b'</body></html>')


class TestRequest(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        server = HTTPServer(('127.0.0.1', 8000), TestHandler)
        cls.server_thread = threading.Thread(target=server.serve_forever, daemon=True)
        cls.server_thread.start()
        time.sleep(1)

        sauce_username = os.environ['SAUCE_USERNAME']
        sauce_access_key = os.environ['SAUCE_ACCESS_KEY']
        url = 'http://{}:{}@ondemand.saucelabs.com:80/wd/hub'.format(
            sauce_username, sauce_access_key)
        caps = {'browserName': "chrome"}
        caps['platform'] = "Linux"
        caps['version'] = "48.0"
        cls.driver = webdriver.Remote(command_executor=url, desired_capabilities=caps)

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()

    def test_request(self):
        self.driver.get('http://127.0.0.1:8000')
        self.assertEqual('Hello!', self.driver.find_element(By.ID, 'main').text)


if __name__ == '__main__':
    unittest.main()

TestHandler.do_GET() is a very simple method for http.server that returns the following HTML:

<html>
<head>
    <title>Python Selenium!</title>
</head>
<body>
    <div id="main">Hello!</div>
</body>
</html>

TestRequest.setUpClass() does three important things before running the tests:

  1. Establishes the HTTPServer instance.
  2. Starts the HTTP server in a thread (to prevent blocking).
  3. Establishes the WebDriver.Remote instance using SauceLabs as the command executor.

TestRequest.tearDownClass() simply shuts down the web driver.

Lastly, TestRequest.test_request() is the single test in this "suite". It simply loads the test server index page and asserts that the text "Hello!" is present inside div#main (which it should be).

Let's give it a try! Remember to set the SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables, first:

export SAUCE_USERNAME={sauce-username}
export SAUCE_ACCESS_KEY={sauce-access-key}
python tests.py
#E
#======================================================================
#ERROR: test_request (__main__.TestRequest)
#----------------------------------------------------------------------
#Traceback (most recent call last):
#[...]
#selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"id","selector":"main"}
#  (Session info: chrome=48.0.2564.97)
#  (Driver info: chromedriver=2.21.371459 (36d3d07f660ff2bc1bf28a75d1cdabed0983e7c4),platform=Linux 3.13.0-83-generic x86)
#[...]

Oh no! What happened? The important bit in the traceback is this: Message: no such element: Unable to locate element: {"method":"id","selector":"main"}. For some reason, Selenium was not able to find the div#main element. Since this test ran in SauceLabs, the SauceLabs Dashboard has information and a replay of the test session which reveals... oh... SauceLabs was trying to access the local network (127.0.0.1) and we forgot to start the proxy client. Oops!

Let's try that one more time, this time starting up the Sauce Connect Proxy (sc) before running the tests...

export SAUCE_USERNAME={sauce-username}
export SAUCE_ACCESS_KEY={sauce-access-key}
sc &
#[...]
#Sauce Connect is up, you may start your tests.
python tests.py
#[...]
#.
#----------------------------------------------------------------------
#Ran 1 test in 6.026s
#
#OK

Note: Don't forget to kill the sc process with, for example, pkill -x sc.

Hooray! This time the test ran successfully because SauceLabs was able to use the proxy client to access the local test server.

This means that the local development environment can still be used for testing without having to deploy between tests.

There you have it. With a local test server up and running, getting consistent results from Selenium can be incredibly smooth and save many, many testing headaches as the code base and developer contributions expand (hopefully!).



from Hacker News http://ift.tt/YV9WJO
via IFTTT

Simplifying the Game of Ice Hockey

http://ift.tt/2gw7SLy


You often hear coaches talk about “working an opening” in many fluid sports. Soccer teams will probe down the right, through the middle, on the left, or through half-spaces in order to create advantageous situations for an attack. Basketball teams will run screens, pick-and-rolls, weaves, and other plays to get an open look for a teammate. Hockey teams will cycle the puck in the corner or work the puck back to the point to create time and space before attacking again.

If a team spends a good deal of time trying to find an opening but is unsuccessful for extended periods, the players can become frustrated and settle for a low-percentage shot. This may happen seconds after fans, equally frustrated, start yelling “Shoot!” from the stands. A coach may even address his team’s performance by saying the team was “trying to make the perfect play,” and the ubiquitous solution is to “simplify the game and get pucks to the net,” or some variation thereof that includes skating: “We weren’t skating tonight,” for example. Not just coaches but players as well will think this. Goalies might love to hear it, because they know the majority of shots coming their way might not be that dangerous.

But is this a valid strategy? Lots of times a coach may do this, enjoy a successful game, and then simply take credit for regression (if a team keeps shooting, it will score eventually). In their minds, they might believe that by simplifying the team’s game they made it more successful. It’s easy to be tricked by recency bias, and what a coach thinks is helpful could actually be detrimental to the team’s performance.

Let’s do a simple exercise, shall we? Say your team loses 3-0 on a night when you outshoot the opposition 30-25. Your team is passing okay, but there are a few times when that extra pass doesn’t connect, a player whiffs on a shot, or someone fumbles a pass. All told, your team generates fourteen shots from multiple passes and sixteen from single passes. You, as coach, might be upset and frustrated at those lost opportunities, so you might tell your team to “simplify their game” and “get pucks on net” to create rebounds. We’ve all heard this before.

The next night you stress getting pucks on net and crashing hard, so now your players are firing away whenever they have the puck and aren’t looking to make extra passes as much. Now you take thirty-five shots, but the distribution is much different. Your players generate only six shots from multiple passes and the other twenty-nine from single passes. What is the impact?

Well, looking at the data that I and others have collected over the last two seasons (a little over 100,000 shot sequences at 5v5 play in 1,040 games), I’ve found that an extra pass before the shot leads to a scoring chance 5% more often than a shot assisted by a single pass. This also explains why a team is more likely to score on a shot assisted by multiple passes than on one assisted by a single pass.

In terms of shooting percentage, a shot on net assisted by a single pass in the offensive zone goes in about 7% of the time; a shot on net assisted by multiple passes in the offensive zone goes in about 11% of the time. You see the same bump for scoring chances as well.

Using those same figures I gave you above, we can calculate how many goals you might be expected to score based on how many passes were made before the shot was taken (this is very simple, but it helps to illustrate the point). So, in the first example, you’d be expected to score (14 x 0.11) + (16 x 0.07) = 2.7 goals. In the second example, you’d have (6 x 0.11) + (29 x 0.07) = 2.7 goals.
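
The same arithmetic as a tiny function, in case you want to plug in your own shot counts (the rates are the shooting percentages quoted above):

def expected_goals(multi_pass_shots, single_pass_shots,
                   multi_rate=0.11, single_rate=0.07):
    # Shooting percentages for multi-pass and single-pass shots on net.
    return multi_pass_shots * multi_rate + single_pass_shots * single_rate

print(round(expected_goals(14, 16), 2))  # 2.66 -- the extra-pass night
print(round(expected_goals(6, 29), 2))   # 2.69 -- the "shoot more" night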

So, the added volume hasn’t necessarily translated to an improved performance. Your team has gotten more shots, yes, but the nature of those shots hasn’t really made your team perform any better, despite the team looking like they are doing more on the ice. In all likelihood, the team has squandered better chances with the encouraged “shot-first” mentality.


Working an Opening

But simply passing more often doesn’t work for every team. Teams like Philadelphia and Los Angeles will make lots of passes in their zone, but these won’t always result in a scoring chance. Philadelphia’s emphasis on low-to-high puck movement and shots from the point isn’t a great one, and L.A. leans on the same approach. In fact, I diagnosed some of the Kings’ offensive issues last summer, only for the organization itself to realize it earlier this year.

So, what we want to look at is which team is more efficient at working an opening and generating a quality chance from inside the home plate area. Here are the top ten teams in our data set at generating these plays.

[Table: the top ten teams in the data set at generating quality chances from the home plate area]

We quickly see a grouping of some of the more skilled forward groups in the league with many of these teams leading the league in scoring.

But Ryan, why not just look at scoring chances?

Well, the answer gets at the whole reason people like myself analyze hockey. While scoring chances are somewhat predictive of future scoring in this dataset, these specific types of passing plays are more predictive, which is what we’re really after. Any shot from the home plate area is good, but the situation preceding the shot has a huge impact on how dangerous that chance will be. Let’s look at a few examples to show the difference.

Evidence

One of the ways the Penguins rank so highly by this metric is due to how patient their players are with the puck and also how their forwards continually seek to exploit open ice in the offensive zone. There’s a lot less standing around than you see with other teams.

In this clip, we see Evgeni Malkin collect a puck below the end line, curl away from his defender, assess the situation, and then pass it back to the point. He then exchanges with Chris Kunitz in the slot as the puck is held at the point, letting the play develop. Malkin then gets open near the right faceoff dot for a one-timer. Had this puck been rushed by the player at the point or had Malkin not continued through the slot, this quality chance doesn’t get created.

Here the Penguins show why they are the most deadly team on the rush in the league. Rather than make a pass upon entering the zone, the puck carrier holds it until the defender steps out to challenge, leaving an open lane to the player attacking the net. From there, it’s a touch pass for a shot that should have been converted had it been anyone else besides Tom Sestito.

This is the advantage of playing below the goal line. Kris Letang sends this puck behind the net for Chris Kunitz to pick up. Kunitz takes a quick peek out front before collecting the puck, so he knows right where to go with it. The defenders are caught because they’re all staring at the puck. It’s a little unfortunate that the pass wasn’t better and that the shooter was right-handed, but this play will result in a good chance more often than not.

Now let’s look at a team that is near the bottom in these types of plays: The Philadelphia Flyers.

The Flyers win the puck and then pass back to the point to Mark Streit. Rather than take a second to see what options are available, Streit simply fires away. He could either pass it off to Shayne Gostisbehere or reverse it to the forward coming out in support. Both would get around the defender in his face and offer better options.

A similar situation unfolds here. The puck comes back to the point and is mindlessly blasted away without a moment’s consideration of better options. The Flyers recover the puck and then proceed to do it all again. The Flyers run one of the worst offensive schemes in the entire league, with a huge percentage of their shots originating from passes back to the point.

[Still from the previous Flyers clip]

A better play in this first still from the previous clip would be to switch the puck to the left defenseman (9PHI). That creates a better shot opportunity, allows for a pass inside to the Flyer in the right faceoff circle, who has inside position on the opposition, or lets the defenseman take the puck to the net himself for a closer shot. Had he played the pass inside and advanced on goal, the Flyers would have an advantageous situation and could isolate the defenseman in the slot.

With how many games teams play and how little time there is to practice and scout opponents outside of the playoffs, something I wrote about here, it can be difficult to offer specific, detailed game plans night in and night out. Small changes, like focusing on making that extra pass, are something coaches can spend time on with the players and get across quickly: it’s not a new system that players have to learn, but rather a matter of prioritizing skills and decision-making on the ice. Teaching tactics rather than systems is not only more economical given the lack of time available to teams, but it also avoids boxing players into rigid play; they can be creative on the ice while keeping a few solid principles in mind.

So, maybe the fans in the stands should have a little more patience and shout “Pass!” instead of “Shoot!” next time.



The post Simplifying the Game of Ice Hockey appeared first on Ice Hockey Coaching Tips & Drills.



from Ice Hockey Coaching Tips & Drills http://ift.tt/29iJqEN
via IFTTT

Google is losing allies across the political spectrum

http://ift.tt/2eI1UU3

Google Executive Chairman Eric Schmidt.


Eight years ago, Google was on top of the world. People across the political spectrum saw the search giant as a symbol of high-tech innovation. During the just-completed 2008 presidential campaign cycle, candidates as diverse as Ron Paul, John McCain, and Barack Obama had all made pilgrimages to Google's Mountain View headquarters to burnish their reputations for tech savvy.

Even better, Google soon had a close relationship to the newly elected president, Barack Obama. "Google was riding high on the fact that Eric Schmidt was campaigning for Obama," said Siva Vaidhyanathan, a media studies professor at the University of Virginia and a longtime Google critic. "There was a lot of attention paid in the press to the fact that Googlers were starting to work in the White House."

With so many Googlers in government, Google had an outsized influence on policymaking during the Obama years. But today, Google is in a different situation. Most obviously, Schmidt worked hard to get Hillary Clinton elected president, and Clinton lost.

The issues don't end there. Given Silicon Valley's liberal views on social issues and Schmidt's love for Democratic politicians, it was probably inevitable that conservatives would sour on the search giant. But the larger problem is that the company has been losing support among Democrats as well.

A growing number of liberal thinkers believe that the concentration of corporate power is a major problem in the American economy. And few companies exemplify that concentration more than Google.

That's the real significance of this week's decision by the New America Foundation, a think tank that's heavily funded by Google, to fire the head of its Open Markets project. For the last eight years, the Open Markets team has been methodically building the intellectual case for more aggressive enforcement of antitrust laws—an effort that could easily result in more regulatory scrutiny of Google.

Google is in no immediate danger on that front. Republicans are still largely committed to a hands-off approach to economic regulation, Democrats are out of power, and Google still has plenty of allies in the Democratic Party.

But the longer-term trajectory here could be ominous. The combination of Bernie Sanders-style populism on the left and Donald Trump-style populism on the right could lead to a future where Google faces hostility from policymakers across parties.

"There's been a really big breakthrough," says Barry Lynn, who led New America's Open Market's team before New America fired him. "It's not just the left. Interest in dealing with concentration of power, the fear of concentration of power is across the spectrum."

Conservatives are increasingly hostile to Google

Conservative skepticism of Google goes back to the early years of the Obama administration. At the time, Vaidhyanathan was working on a book criticizing Google that came out in 2011. While promoting the book, he said, he kept getting invited on talk radio—what he describes as "angry white guy shows."

"For a couple of weeks they were really interested in Google and their relationship with Obama," Vaidhyanathan told Ars. "And it turned out that Glenn Beck had done one of his chalk board drawings connecting George Soros to Eric Schmidt and Sergey Brin."

Vaidhyanathan is generally a Google critic, but he found himself in the unusual position of defending Google against unfounded conspiracy theories. Still, there really was a close relationship between Google and the Obama White House. And that relationship—and Schmidt's subsequent support for Hillary Clinton—started to drive a wedge between Google and grassroots conservatives.

Conservative skepticism of Google has only intensified in 2017. The high-profile August firing of James Damore was one key moment. Damore wrote a controversial memo suggesting that Google's gender gap might be explained by women having less interest in or aptitude for software engineering, and he argued that Google was becoming an "ideological echo chamber" where right-of-center views weren't welcome.

When Google fired Damore, many conservatives argued that the move proved his point. Conservative critics believed that Damore's arguments should have been taken seriously within Google and that his termination essentially signaled that conservative viewpoints were not welcome in Mountain View.

Another flashpoint came later in the month, when Google canceled the domain registration of the neo-Nazi site the Daily Stormer and booted Gab, a right-wing Twitter competitor, from its Android app store. While few conservatives have sympathy for Nazis, many worry that similar reasoning could lead to censorship of more mainstream speech by Google and other technology giants.

That has led to the unusual spectacle of conservatives calling for government regulation of a major American company. "Since it has the power to censor the Internet, Google should be regulated like the public utility it is, to make sure it doesn't further distort the free flow of information to the rest of us," Fox News host Tucker Carlson argued.

"The evidence of Silicon Valley’s hostility to the Right is everywhere," wrote Jeremy Carl, a researcher at the conservative Hoover Institution. Like Carlson, Carl made the case for treating Google and other Silicon Valley companies like public utilities.

To be clear, this is still very much a minority view on the right. Most conservative policy experts still favor the deregulatory point of view Republicans have advocated since the Reagan years.

Frank Pasquale, a legal scholar at the University of Maryland, points out that despite using some anti-monopoly rhetoric on the campaign trail, Donald Trump has relied more on orthodox free-market conservatives in the White House. Former Federal Trade Commissioner Josh Wright, a skeptic of strict antitrust enforcement, served on the Trump transition team. And Trump's choice to lead the antitrust division of the Justice Department, Makan Delrahim, is expected to enforce antitrust law less aggressively than his Obama administration predecessors. While candidate Trump sometimes hinted he would declare war on major technology companies, there has been no sign so far that he will follow through on those threats.

But over the longer term, political rhetoric has a way of transforming into political action. If hostility toward Google and other Silicon Valley giants becomes widespread among conservatives, sooner or later Republican politicians will find ways to capitalize on that. The next Republican president might campaign on the same kind of anti-monopoly rhetoric Trump did but actually follow through with it in office.



from Hacker News http://ift.tt/YV9WJO
via IFTTT