The Crypto market is gaining lots of steam.
Gravity-defying price rallies…
…and multi-million dollar token sales are commonplace, as are front-page headlines from traditional news outlets discussing Ethereum, Bitcoin, ICOs, tokens, hard forks, and other technical topics.
Even my 13-year-old brother has been calling me up asking for explanations!
I’ve been personally invested in this space for a while now — most recently as an engineer for Coinbase — but even I’m surprised by how quickly the Crypto space has evolved in the past six months.
If you want to understand why crypto is getting the spotlight, you have to understand the behind-the-scenes catalysts driving the market. Right now, that catalyst is the “token sale” or “Initial Coin Offering (ICO)” phenomenon.
What the heck is an ICO anyway?
You may have heard of an “Initial Public Offering” (IPO) — when a company goes public by selling some of its shares to institutional investors, who in turn sell to the general public on the securities exchange. The public gets excited about IPOs because they let anyone with a brokerage account purchase shares of companies like Snapchat.
Are ICOs the same thing? Yes and no. IPOs and ICOs are both used by companies to raise capital. The main (and really important) difference is regulation. IPOs are regulated by the SEC and have a set of legal requirements and a formal process for how they’re carried out. ICOs are currently unregulated and more of a “wild west” practice.
Overall, there seems to be a lot of confusion and uncertainty when it comes to ICOs. Some argue that they have turned into a “perverse and unsustainable Keynesian beauty contest.” Supporters are optimistic and claim that it’s a new form of Venture Capital.
With drastically opposing viewpoints like this dominating the conversation, most of us are left on the sidelines scratching our heads.
You can’t understand ICOs without understanding the underlying digital asset sold in an ICO.
If you already know the basics of crypto, feel free to skip this section. For the rest of us… let’s start from the top!
Bitcoin is a decentralized digital currency that runs on peer-to-peer technology.
Peer-to-peer essentially means that there isn’t a central authority issuing new money or tracking transactions. Instead, these operations are managed collectively by the network. The transactions happen between users directly and are recorded on the blockchain (more on that below).
The Internet is filled with great Bitcoin explainers, so I won’t delve much farther down the rabbit hole in this post. Instead, here are some starting points to get you up to speed:
Bitcoin Wiki, Wikipedia, What is Bitcoin, Bitcoin Magazine, Why Bitcoin matters.
A blockchain is a distributed public database that keeps a permanent record of digital transactions.
In other words, it’s a logfile storing an immutable record of all digital transactions. This distributed database is not controlled by a central administrator, but is instead a network of replicated databases (meaning each node in the network stores its own copy of the blockchain) that is shared and visible to anyone within the network.
Every “block” in this blockchain contains a record of recent transactions, a reference to the block that came immediately before it, and an answer to a difficult mathematical puzzle, among other things.
A blockchain is collectively maintained by “miners”, who are members within the network that compete to validate Bitcoin transactions in each block by solving the complex algorithmic problem associated with the block.
They do this by buying or renting lots of computing power to run these complex algorithmic problems on. The incentive for them to use their computing power to verify transactions is that they are rewarded with Bitcoin if they solve the problem and validate a Bitcoin block.
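To make this concrete, here is a toy sketch in JavaScript (not real Bitcoin code, and heavily simplified) of what a block might contain and how a miner searches for a valid proof-of-work:
const crypto = require("crypto");
// Hash the block's contents; in Bitcoin this is (roughly) a double SHA-256 of the block header.
const hashBlock = (block) =>
  crypto.createHash("sha256").update(JSON.stringify(block)).digest("hex");
// "Mining": keep trying nonces until the block's hash meets the difficulty target.
const mineBlock = (previousHash, transactions, difficulty = 3) => {
  let nonce = 0;
  let block = { previousHash, transactions, nonce };
  while (!hashBlock(block).startsWith("0".repeat(difficulty))) {
    block = { previousHash, transactions, nonce: ++nonce };
  }
  return block;
};
Each block points at the one before it via previousHash, which is what chains the blocks together and makes tampering with history detectable.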
The power of such a decentralized network is that economic value and governance are distributed among the network’s stakeholders (i.e. miners and consumers) rather than concentrated in a single organization (e.g. banks, governments & accountants). Thanks to this setup, anyone can own and transfer assets digitally without the need for a third party.
Blockchain technology isn’t limited to Bitcoin. It can be used to create any other cryptocurrency, such as Ethereum and Litecoin, which utilize their own blockchains.
You can read more about bitcoin and blockchain at Wikipedia or watch this great explainer video.
Next, we have the protocol layer. In general, a protocol is the special set of rules that nodes in a network use when they transmit information. These rules specify the interactions between the communicating entities.
One example of a protocol used in telecommunications is Transmission Control Protocol (TCP), which is a set of rules for exchanging messages at the information packet level on the internet. TCP guarantees that the data packets will be delivered and that they will be delivered in the same order in which they were sent. Another example of a protocol is Internet Protocol (IP), which is a set of rules to send and receive messages at the Internet address level — it essentially specifies the format of the data packets on the internet and the addressing scheme.
When discussing blockchains, the term “protocol” refers to the “cryptoeconomic rules” that are enforced by a blockchain in order to maintain distributed consensus across the blockchain’s peer-to-peer network.
Cryptoeconomic rules are rules that govern a decentralized digital economy that:
(i) uses public key cryptography for authentication
(ii) has economic incentives to ensure that the rules are followed
For example, Bitcoin’s blockchain provides financial incentives to miners for validating every Bitcoin transaction and, in turn, securing the network.
What exactly are these financial incentives?
Enter tokens.
The financial incentive for miners comes from the native token built on top of the Bitcoin blockchain — Bitcoin. The coin serves as a “carrot and stick” — miners who use their computing power to validate transactions are rewarded with a certain amount of coin.
In general, when you hear the term “cryptocurrency tokens” or simply “tokens”, people are referring to tokens such as Bitcoin that are built on top of a blockchain and represent a digital asset which you own and can transfer to someone else.
There are various ways to create tokens on top of a blockchain. For example, the simplest tokens to understand are intrinsic tokens like Bitcoin, which is built directly on top of the Bitcoin blockchain. Or you can fork the Bitcoin blockchain and build tokens on top — some examples include ZCash, Litecoin, Monero, and others. Or you can build an entirely new blockchain technology and build a token on top of that — which is what Ethereum did. The token on top of Ethereum’s blockchain is “Ether”.
…you can even build tokens on top of Ethereum’s blockchain itself. Gnosis (GNO) and Augur (REP) are examples of this. This is perhaps confusing, since “Ether” is the intrinsic token built on top of the Ethereum blockchain. I’ll explain later in the post. For now, just accept the fact that it’s possible to build other tokens besides the intrinsic token on the Ethereum blockchain.
There’s a helpful analogy here with traditional currencies — you can think of tokens as the currency itself (e.g. USD, EUR, etc.) and the blockchain protocol as the monetary policy.
The main takeaway here is that every token is based on some underlying blockchain — whether it’s Bitcoin’s blockchain, Ethereum’s blockchain, or some other forked/new blockchain.
Regardless of the cryptocurrency in question, tokens are valuable because the blockchain provides a backbone for asset manipulation that is immutable, decentralized, and impossible to counterfeit.
So far, we’ve learned about Bitcoin and the underlying blockchain that enables it. We’ve also learned about the protocol that determines the rules of the blockchain, and the tokens built on top of it.
Together, these technologies have made us rethink our definition of money as something that is digital, easily transferrable, secure, and decentralized.
But the important part to realize is that money is just one application of the blockchain. Besides money, the reason so many of us in the crypto-world are nerding out about the blockchain is because it has revealed a potential future for (1) protocols and (2) applications in general.
(1) Protocols
The ultimate dream of cryptocurrency developers is that we can take advantage of this blockchain technology to build new and improved communication protocols from the ground up. Protocols being developed for cryptocurrencies have the potential to solve problems with centralization that have plagued the Internet since the first dial-up modem whirred and beeped into action.
What are examples of such protocols?
Well, they could include protocols for payments, identity, domain name systems, cloud computing, reputation systems, and much more. Many of these systems today are highly centralized (e.g. Stripe, Paypal, Google, Amazon), and there are no defaults or standards for these things on the Web.
Hence, in the long term, our hope is that the blockchain technology will enable decentralized, open, and secure protocols to be built with use cases far outside cryptocurrency.
(2) Applications
Blockchain enables what we call “decentralized applications”.
A decentralized application, or “dApp”, is an application built on top of the blockchain. How does that work?
Let’s consider the Bitcoin blockchain as an example. Bitcoin uses a scripting system for transactions that occur on the Bitcoin blockchain. A script is a simple list of instructions. Bitcoin’s scripting language lets us write a script that is recorded with every transaction. The purpose of the script is to define the requirements the recipient must meet to gain access to the Bitcoins being transferred.
For a typical Bitcoin transfer, the script will define what the spender must provide: a public key that, when hashed, matches the address the Bitcoins were sent to, and a signature proving ownership of the corresponding private key.
But the neat thing is that there’s some flexibility in the parameters we can send with each transaction. For example, we can write a script that says “this transaction is only valid if it has been signed by two private keys”. So essentially, this scripting language lets us encode rules for how to move money (or, more generally, any piece of information) around, without requiring us to trust some third party to follow a set of rules we care about. We simply trust the code and all is well.
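As an illustration of the kind of rule described above, here's a conceptual JavaScript sketch (not how Bitcoin actually implements it) of a "valid only with two signatures" check, with signature verification abstracted into a function the caller supplies:
// Returns true only if at least two of the required keys have produced a valid signature.
const isValidSpend = (tx, requiredKeys, verifySignature) => {
  const signedKeys = requiredKeys.filter((key) =>
    tx.signatures.some((sig) => verifySignature(key, tx, sig))
  );
  return signedKeys.length >= 2;
};
In real Bitcoin, a rule like this is expressed directly in Bitcoin Script rather than JavaScript, but the idea is the same: the rule travels with the transaction.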
Because Bitcoin has this scripting language, it’s possible to use this language to build certain types of applications that transact on the blockchain. In other words, we can build applications that use Bitcoin transactions to communicate.
For example, let’s say we want to build a blockchain-based crowdfunding application. You might have a set of rules for how funds are transferred (or communicated) between one party to another which you encode in the scripting language. Then users of the application can run a crowdfunding event that is governed by the blockchain.
This is the main idea behind dApps: a decentralized set of rules that defines a specific application. This set of rules sits on a public and decentralized blockchain (instead of a central server owned by some large entity, such as Facebook or Amazon), which enables the application to be governed autonomously and to be resilient to censorship.
Many of us in the crypto world were under the impression that developers would immediately hop on the bandwagon and use Bitcoin’s scripting language to build decentralized applications on top.
But fast forward eight years (Bitcoin was released in 2009), and Bitcoin has yet to become more than simply a store of value and a speculative investment.
Sure, we’ve seen a handful of wallets and exchanges built. (Coinbase, Kraken, Poloniex, and GDAX, to name a few.)
…And of course, we can’t forget Silk Road, the digital anonymous drug marketplace that processed over $1 billion in sales in 2.5 years and was shut down by law enforcement in late 2013.
In some ways, Bitcoin could be considered the first decentralized application since it runs on blockchain technology, is fully open-source, and runs without a central authority.
But seriously, a lot of us are here still looking around and wondering, “Where are the killer apps?”
Sadly, almost no one I know uses blockchain-based applications in their day to day.
Here are some factors holding these applications back from popularity (note: these are my personal opinions):
Programming applications using Bitcoin’s scripting language is not easy. Why?
For one, the scripting language is too limited. A scripting language is a programming language where you can write code to perform some actions. An example of a scripting language widely used on the web today is JavaScript.
const greeting = (name) => "Hello, " + name + "!";
const add = (a, b) => a + b;
const subtract = (a, b) => a - b;
Compare this to Bitcoin’s scripting language:
OP_DUP OP_HASH160 62e907b15cbf27d5425399ebf6f0fb50ebb88f18 OP_EQUALVERIFY OP_CHECKSIG
The JavaScript on top reads pretty much like English. Bitcoin’s scripting language, on the other hand, looks like machine code. Most developers are used to writing in expressive languages like JavaScript, Ruby or Python… not machine code. Bitcoin script is daunting for most developers.
Secondly, developer tooling and great documentation goes a long way in gaining adoption among developers. Take React, for example, which is one of the most popular front-end libraries today. One of the biggest reasons that React became so popular is because of how much effort the community has put into building a strong set of developer tools (e.g. IDEs, Babel, Webpack, boilerplates, Create React App, etc), documentation, and tutorials. Bitcoin’s ecosystem, on the other hand, is the opposite of user-friendly.
Lastly, Bitcoin’s scripting language is not Turing complete. A Turing complete programming language is one that can be used to simulate any single-taped Turing machine. In other words, it can be used to solve any computational problem that a Turing machine can solve, given enough time and memory. (For more on this, read this Stackoverflow discussion.) By not being Turing complete, Bitcoin Script restricts what you can do.
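To make that limitation concrete: even something as routine as a loop, trivial in JavaScript, has no equivalent in Bitcoin Script, because the language deliberately omits loops to keep script execution bounded.
// Trivial in JavaScript, inexpressible in Bitcoin Script: a simple loop.
let total = 0;
for (let i = 1; i <= 10; i++) {
  total += i; // total ends up as 55
}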
Overall, Bitcoin’s scripting language has historically been limited and difficult to use, and has lacked adequate tooling and documentation. As a result, it didn’t encourage a developer community to form, which is the prerequisite to killer applications.
Many of the applications we use in our daily work (marketplaces, exchanges, social networks, etc) derive their value from their strong network effects. A network effect is when a product or service increases in value as more people use it.
A classic example is Facebook. Every new user connecting to other users on the platform non-linearly increases the number of connections. Similarly, Venmo is useless if you’re the only person on the platform. For every new friend that joins, the value of the product goes up because you can now pay and/or receive payment from this friend.
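A quick back-of-the-envelope calculation shows why each additional user matters more than the last: with n users, the number of possible pairwise connections grows quadratically.
const possibleConnections = (n) => (n * (n - 1)) / 2;
possibleConnections(10);  // 45 possible connections
possibleConnections(100); // 4,950 possible connections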
Network effects help build better products and services. However, building up this network is one of the hardest parts of building a successful product, classically known as the “chicken and egg” problem.
So even if a developer were to make the effort to build a decentralized crowdfunding platform on top of Bitcoin’s blockchain, getting users on both sides of the platform (i.e. investors and product-builders) is an incredibly hard challenge.
The blockchain provides the technological underpinnings to create decentralized applications, but it doesn’t provide the framework or tools necessary to drive adoption of the network.
When we talk about decentralized applications built on top of the blockchain, we might think of transaction-based platforms, such as crowdfunding, remittances, payments, coupons, etc. It might be a neat technical feat to have a decentralized version of these types of services, but the reality is, we already have existing apps that work perfectly fine for each of these use cases.
For crowdfunding, we have Kickstarter. For remittances, we can use TransferWise. For payments, we can use Credit Cards, Paypal, Venmo, Square, etc.
Peter Thiel’s 10x rule is important to think about when we’re considering how to get users to replace existing solutions with new decentralized ones. As of now, it’s unclear along what dimension these 10x advantages will come, as far as users are concerned.
Take WeiFund, for example, which is a decentralized crowdfunding platform. As a user, WeiFund's interface and user experience seems similar to conventional crowdfunding platforms such as Kickstarter or GoFundMe. The main differences seem to be that they claim to have lower costs and that they use smart contracts to run the crowdfunding, allowing for more complex agreements. Is this enough to get users to make the effort to switch over (especially when the costs aren't that much lower)?
By no means do I believe that decentralized applications have no benefits. In fact, I foresee a future where applications are 10x more secure, 10x cheaper, 10x more efficient, or 10x more on some dimension than the current ones.
The point is that these benefits have not been proven yet, so there's little reason for users to consider using a decentralized application today.
Enter Ethereum.
Ethereum is a cryptocurrency launched in 2015 and built from the ground up using its own blockchain technology. It was designed to be a more generalized protocol than Bitcoin’s blockchain, with the explicit goal of doing more than just creating and recording transfers of a blockchain network’s native tokens.
As written in the Ethereum white paper:
“The intent of Ethereum is to create an alternative protocol for building decentralized applications, providing a different set of tradeoffs that we believe will be very useful for a large class of decentralized applications, with particular emphasis on situations where rapid development time, security for small and rarely used applications, and the ability of different applications to very efficiently interact, are important. Ethereum does this by building what is essentially the ultimate abstract foundational layer: a blockchain with a built-in Turing-complete programming language, allowing anyone to write smart contracts and decentralized applications where they can create their own arbitrary rules for ownership, transaction formats and state transition functions.”
In essence, Ethereum is simply a transaction-based state machine: we begin with a “genesis state” and incrementally execute transactions to transform it into some final state. The final state is what we accept as the canonical version of the current state of the world of Ethereum.
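Here's a minimal JavaScript sketch of that "transaction-based state machine" idea (purely conceptual; Ethereum's real state transition function is far richer and also covers contract code, storage, and gas):
// The "state" here is just a mapping from account to balance.
const applyTransaction = (state, tx) => {
  if ((state[tx.from] || 0) < tx.amount) throw new Error("invalid transaction");
  return {
    ...state,
    [tx.from]: state[tx.from] - tx.amount,
    [tx.to]: (state[tx.to] || 0) + tx.amount,
  };
};
// Replaying every transaction from the genesis state yields the current canonical state.
const genesisState = { alice: 100, bob: 0 };
const transactions = [{ from: "alice", to: "bob", amount: 30 }];
const currentState = transactions.reduce(applyTransaction, genesisState);
// currentState is { alice: 70, bob: 30 }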
While Bitcoin is the intrinsic token for Bitcoin’s blockchain, Ether is the intrinsic token for Ethereum’s blockchain.
Just like Bitcoin, the Ethereum blockchain contains a log of transaction-like events. Users send Ether to one another using the “log,” and miners are incentivized to verify and secure these transactions within the network.
But it can also go way beyond that — the Ethereum blockchain can be filled with a wider variety of event information coming from any sort of computer program.
Let’s look at a few of the core concepts that underlie the Ethereum blockchain to understand why this is possible:
First is accounts. There are two types of accounts: Externally Owned Accounts and Contract Accounts. Both account types have an Ether balance.
The main distinction is that contract accounts have some piece of code associated with them, while externally owned accounts do not. Contract accounts, therefore, have the ability to perform any type of computation when their associated code is executed.
Next we have what are known as transactions, which are cryptographically signed data packages that store a message to be sent from an externally owned account to another account on the blockchain. When a transaction is sent to a contract account, the code associated with the contract account is executed by the “Ethereum Virtual Machine (EVM)” on each node (more on that below).
Finally, there are messages. Messages allow contract accounts to call one another. When a contract account sends a message to another contract account, the code associated with the receiving account is activated. Essentially, a message is like a transaction, except it’s produced by a contract account rather than an externally owned account.
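As a rough illustration (the addresses and values below are made up), the two account types can be pictured as plain data structures:
// An externally owned account is controlled by whoever holds its private key.
const externallyOwnedAccount = {
  address: "0xabc...",
  balance: 10, // Ether balance
};
// A contract account additionally carries code and persistent storage;
// its code runs whenever the account receives a transaction or message.
const contractAccount = {
  address: "0xdef...",
  balance: 2,
  code: "<EVM bytecode>",
  storage: {},
};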
Let’s quickly explain the concept of the “Ethereum Virtual Machine (EVM)”. Remember how we learned that the protocol for the Bitcoin blockchain determines how transactions on the network get verified? Well, in Ethereum’s case, every node that is participating in the Ethereum network runs the EVM as part of this verification process.
Let’s say we have a set of transactions that were started by some external accounts. These get accumulated into a block, and then the nodes in the Ethereum network go through the transactions listed in the block and run the code associated with these transactions within the EVM. It’s important to note that every node in the network runs the code and stores the resulting values. As you might guess, this tends to be computationally very expensive. To compensate for this expense and incentivize the nodes (or miners) to run these computations, the miners specify a fee for running these transactions. This fee is referred to as “gas” (you can read more on gas here). This is similar to how fees work in Bitcoin, where any fees attached to a bitcoin transaction go to the miner who mined the block that included the transaction.
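The gas arithmetic itself is simple. For example (the gas price below is an arbitrary illustrative value, not a recommendation):
const gasUsed = 21000;                    // a plain Ether transfer costs 21,000 gas
const gasPriceInWei = 20e9;               // 20 gwei, picked purely for illustration
const feeInWei = gasUsed * gasPriceInWei; // 420,000,000,000,000 wei
const feeInEther = feeInWei / 1e18;       // 0.00042 ETH, paid to the miner of the block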
Note: This is a very high level description of how the Ethereum blockchain works and it certainly skips a lot of details for purposes of brevity. I’ll write more in-depth articles in the future.
Lastly, we have Ethereum’s programming languages for writing executable distributed applications and contracts. Unlike Bitcoin Script, Ethereum’s programming languages (Solidity for those who like JavaScript, Serpent for those who like Python) don’t look like machine code. They have the expressive power and functionality of languages that programmers are accustomed to developing in, like JavaScript or Python. Moreover, they let you do pretty much anything an advanced programming language would let you do. Hence, they are “Turing complete”.
The key takeaway from all this is that Ethereum stepped into the crypto-world and provided us with a generalized framework for running any type of code on the blockchain more easily. Because Ethereum’s language is Turing complete, stateful, and developer friendly, the hope was to open up the benefits of the blockchain beyond just enforcing one particular ruleset (e.g. how digital money gets transferred) and enable a safe, open, highly available, autonomously governed, efficient, trustable and reliable mechanism for building any ruleset on top. This would let developers build any type of application imaginable.
An example of an application that is incredibly simple to build on Ethereum is a “smart contract”. A smart contract is a distributed contract that is represented in code and basically says “if this happens, then do that”. Just like regular contracts (e.g. a property lease or an employment agreement), they are used to form agreements with people or entities, but unlike regular contracts, they act like autonomous agents that run entirely on the blockchain and take the human out of the loop, making them automated, open, secure and trustless.
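Conceptually, a smart contract is just code of the form below. (This sketch is written in JavaScript for readability; a real contract would be written in a language like Solidity and executed by the EVM, and every name and value here is hypothetical.)
// "If the buyer has approved, pay the seller; if the deadline has passed, refund the buyer."
const escrowContract = {
  buyerApproved: false,
  deadline: Date.parse("2018-01-01"),
  settle(balance) {
    if (this.buyerApproved) return { paySeller: balance };
    if (Date.now() > this.deadline) return { refundBuyer: balance };
    return { hold: balance }; // otherwise, keep waiting
  },
};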
Another example of an application is a decentralized organization. A decentralized organization is a programmatic organization that runs based on rules encoded within smart contracts. So instead of the typical hierarchical structure of an organization that is managed by humans, a decentralized organization encodes all its rules into a smart contract and then is completely managed by a blockchain.
Despite the fact that Ethereum has made it easy to build applications on the blockchain, let’s admit it: most of us, crypto-nerds included, are still living in a world where we don’t use decentralized applications in our day to day.
Why is that?
To explain, let’s go back to my earlier hypothesis on why we ended up at the “where are the apps” problem, and see how Ethereum addresses each one.
Problem #1: Lack of developer friendliness
As we described above, Ethereum solves this problem by design through its expressive programming language and strong developer tooling.
Problem #2: Building up network effects is hard
With or without Ethereum, seeding and spinning up the network effects is still a huge roadblock. Replacing existing network businesses that have built up huge network effects is… as we said before, really HARD. If someone builds a decentralized Airbnb, they still need to convince both sides of the platform, the users and hosts, to come on board.
Problem #3: Doesn’t provide 10x improvement
We learned earlier that most users wouldn’t be willing to switch to a decentralized platform unless it’s 10x better than an existing solution on some dimension.
Just because it’s easier to build a decentralized application on Ethereum doesn’t mean it provides the 10x experience we’re looking for. And so the question we might ask is: are we right back to square one? Still stuck in the trenches?
Well, not really.
Because although Ethereum doesn’t directly solve the network effects problem, nor the 10x problem, what it does do is enable the creation of a whole new set of applications that were never possible before. The clearest way to make a 10x improvement is to invent something completely new. I believe Ethereum makes inventing something completely new possible by making it easy to build smart contracts.
Why the big deal about being able to build smart contracts?
Well, the beauty of being able to easily build smart contracts on Ethereum is that it enables anyone to easily build a new protocol on top of Ethereum. Remember that a protocol is simply a set of rules that nodes in a network use when they transmit information. Smart contracts allow us to do exactly this — create an automated, trustworthy set of rules between two or more parties.
Earlier, we mentioned how blockchain protocols have an intrinsic “token” associated with them, which is a digital asset that can be transferred between two users in the network without requiring the consent of a third party. In the case of Bitcoin’s blockchain, the intrinsic token is Bitcoin, and in the case of Ethereum’s blockchain, the intrinsic token is Ether.
But just because the Ethereum and Bitcoin blockchain protocols have intrinsic tokens associated with them to drive their networks doesn’t mean a protocol built on Ethereum using a smart contract must have a token associated with it. Remember that the purpose of a protocol is simply to specify rules for communication between nodes.
So essentially, there are two types of protocols: those that have a token associated with them to drive the network, and those that don’t.
For lack of better names, I’ll call the first kind “crypto-token-protocols” and the second kind “crypto-protocols”.
Now onto tokens.
Just like Ethereum makes it possible to build new protocols on top of its blockchain, it also makes it possible to use smart contracts to build new tokens on top of its blockchain. Let’s call these types of tokens “non-intrinsic tokens”.
In this regard, broadly speaking, we can think of a token system as just a database with one operation: subtract X units from A and give X units to B, under the condition that:
(i) A had at least X units before the transaction
(ii) The transaction is approved by A
Ethereum makes it especially easy to implement such token systems. More specifically, the ERC20 token interface provides a standardized way to develop a token that is compatible with the existing Ethereum ecosystem, such as development tools, wallets, and exchanges.
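Here is a toy, in-memory sketch of that one-operation token system in JavaScript. It is not the ERC20 standard itself (ERC20 standardizes functions such as balanceOf, transfer, approve, and transferFrom), just the core bookkeeping idea:
const balances = { alice: 100, bob: 0 };
const transfer = (from, to, amount, approvedBy) => {
  if (approvedBy !== from) throw new Error("not approved by sender");          // condition (ii)
  if ((balances[from] || 0) < amount) throw new Error("insufficient balance"); // condition (i)
  balances[from] -= amount;
  balances[to] = (balances[to] || 0) + amount;
};
transfer("alice", "bob", 25, "alice"); // balances is now { alice: 75, bob: 25 }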
What’s more, these non-intrinsic tokens can take several different forms.
Protocols? Tokens? Protocols + Tokens? Why does this matter?
Let’s take a look.
Launching a new cryptocurrency blockchain is not easy — it requires a massive bootstrapping effort to assemble the resources needed to get it up and running. In Ethereum’s case, its intrinsic token was used to spin up its blockchain: in order to kickstart a large network of developers, miners, investors, and other stakeholders, Ethereum created some Ether tokens and launched a presale of these tokens to the general public. It then used these funds to develop its blockchain.
Ethereum was not the first to do this. In 2013, when Ripple started to develop its Ripple payment system, it created around 100 billion XRP tokens and sold these tokens to fund the development of the Ripple platform.
This concept of fundraising via a token sale is sometimes referred to as an “Initial Coin Offering”, or ICO. But the structure of this token can vary significantly (as we just saw in the previous section), whereas the term “ICO” makes it sound a lot more official and like an investment security, so let’s stick to “token sale”.
A token sale is when some party offers investors units of a new cryptocurrency (i.e. token) at a certain price, which can later be exchanged for other cryptocurrencies (i.e. tokens). The idea is that investors buy into these tokens, and the units of the token are fungible and transferable on cryptocurrency exchanges (e.g. Bitfinex, GDAX, Liqui, etc.) if there is demand for them.
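Stripped of the smart-contract machinery, the accounting behind a token sale is simple. Here is a toy sketch (the rate and names are entirely made up):
const tokensPerEther = 1000; // hypothetical exchange rate set by the project
const tokenBalances = {};
let etherRaised = 0;
const contribute = (buyer, etherAmount) => {
  etherRaised += etherAmount; // funds go to the development team
  tokenBalances[buyer] = (tokenBalances[buyer] || 0) + etherAmount * tokensPerEther;
};
contribute("earlyAdopter", 5); // earlyAdopter now holds 5,000 project tokens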
While most token sales in the past have been restricted to building a new cryptocurrency (e.g. Ethereum, Ripple, etc), the smart contracts of Ethereum are now enabling startups to also use token sales to fund development of various protocols and applications built on top of existing blockchains.
Before moving on, one important distinction to make is the difference between an application and a protocol.
Application vs. protocol
An application can be built on one or more protocols. One example is Augur, a decentralized prediction markets application built on top of two protocols: a decentralized oracle protocol and an exchange protocol.
The decentralized oracle protocol is a “crypto-token-protocol” that has financial incentives to drive the network to form consensus around the outcomes of real-world events using Augur’s reputation tokens (REP). The exchange protocol, on the other hand, is a “crypto-protocol” and does not have a token associated with it to drive financial incentives, but instead is a set of rules defined between buyers and sellers in order to move tokens between each other.
But neither of these protocols need to be tied to a single application. Any application can in theory build on top of these underlying protocols.
Token sales for protocols vs. applications
Earlier, I mentioned how token sales can be used to drive development of a new protocol and/or to drive development of a new application.
So in essence, a team can use a token sale to fund the development of a new protocol, an application built on top of an existing protocol, or even an application that isn’t tied to any particular protocol at all.
So, pretty much anything :)
The last one is interesting, because to do a token sale, the application doesn’t even need to be built on top of a protocol. I could build a non-profit organization and use tokens as a mechanism to fund the project. In this sense, a token sale simply becomes a new way to fund a traditional centralized application. A plain old crowdsale.
Okay, so investors buy these tokens and then what happens?
It depends. When a token is tied to a crypto-token-protocol, it looks much more like the intrinsic tokens Ether and Bitcoin and is used to drive the development and network of a protocol. But when it isn’t, the token represents something much more general. In fact, these tokens are flexible enough to represent a lot of different things.
For instance, let’s say I want to build a decentralized storage service. I can build a storage protocol using smart contracts which serve as agreements between a storage provider and their client, defining what data will be stored and at what price.
I would then build a token for this protocol and do a token sale. If the protocol becomes widely used, then the protocol becomes more valuable, which in turn could increase the value of the token. Moreover, as a developer of this service, I could choose to make the tokens represent purchase rights to the services provided in the application.
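Concretely, a single storage agreement under such a protocol might boil down to something like this sketch (all names, prices, and units are hypothetical):
const storageAgreement = {
  provider: "storageNodeA",
  client: "alice",
  gigabytes: 50,
  pricePerGbInTokens: 2,
  totalPriceInTokens() {
    return this.gigabytes * this.pricePerGbInTokens; // 100 tokens buys 50 GB for the term
  },
};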
What’s important to note is that, broadly speaking, the mechanisms for creating tokens are so flexible that tokens can end up representing many different things.
There’s a handful of projects that have successfully raised funds via a token sale, including Augur, Antshares, Melonport, Gnosis, and many more. I’d suggest you read their respective white papers if you want to learn more.
We’re at a point where Ethereum has made it easy to not only build protocols that can power decentralized applications, but also to help get a network off the ground. Ethereum does this in two ways:
1. Money
This one is obvious. As we already saw, a token sale now enables developers to easily release tradable tokens to raise funds for building a protocol and/or application. Using this money, the team could choose to invest in sales, marketing, etc. to drive the network.
2. Users
This is the more interesting piece of the puzzle. Protocols and decentralized applications can solve the network effects problem by using a token sale as a mechanism to get early contributors and adopters. Early adopters who believe in the protocol or application have an incentive to buy the token because there is potential for that token to be worth more in the future.
So in essence, tokens could help bootstrap a network of early adopters because the incentives of the early adopters and the development team line up perfectly.
Let’s say you want to build a new file sharing protocol. You can launch a token sale through which you gain some early adopters, investors, and entrepreneurs who become interested in “buying in.” They might be simply speculating or they might truly believe in the product. At that point, they become stakeholders in the protocol itself and are financially invested in its success. Then some of these early adopters either become users of the products built on top of the protocol or build products and services around the protocol themselves, with the incentive to drive the success of the protocol further in order to increase the value of their tokens. As the protocol gains adoption, it increases the value of the tokens, which further draws more attention from more investors, application builders and users, which leads to more applications, and so on.
What Ethereum has done is create an incredibly flexible system to innovate at the protocol level and application level. We’ll likely see a lot of experimental and innovative protocols and applications being built over the coming years. Many of these will fail, just like a lot of startups fail. But over time, it’s likely that some core set of protocols and associated networks will successfully drive mainstream adoption.
Finally, once the protocols begin to take shape and standardize, we’ll see a whole host of decentralized applications being built on top.
Token sales are providing the fuel needed to drive development of protocols built on top of the blockchain, and to further drive developer interest in building applications on top of these protocols.
Of course, this isn’t the perfect happy ending.
For one, getting a bunch of early adopters isn’t enough. You also need to work hard to sustain the growth of the network effects, just like traditional businesses do. That means putting in years of hard work to build a useful application and drive adoption.
Secondly, another trend I’ve noticed is that most of the token sales we are seeing today are being used to drive network effects around specific applications rather than open and decentralized protocols. Since tokens are so flexible, dApp developers are creating tokens that are coupled to the dApp, instead of a standardized underlying protocol that can be shared among applications. This could lead to fragmentation in protocols.
Third, the initial growth of the token value is mostly driven by speculation (since it takes some time for the platform being built to become valuable). Hence, token prices will likely be highly volatile. It’s unclear if and how we can mitigate this, and whether we can even figure out a mechanism to get token prices to stabilize over time. Overall, there are a lot of open questions around the viability of a token’s value over time. Ideally, we want the token’s value to be tied to the value of the protocol or application, similar to how a public company’s stock is tied to the company that issued it, or to represent some valuable digital right to a service. But as of today, the value of these tokens is still mostly speculation.
Fourth, the market for token sales is incredibly frothy right now. Because securities regulations make it difficult to sell tokens (which are unregistered securities) as equity (remember that a token can represent anything, including equity within the protocol or application), developers are not doing it. Instead, they are structuring them as crowdsales. While there are some highly respectable projects raising much-needed capital in this manner without the hassle of regulation, there’s a long tail of projects that are simply taking advantage of the high demand in the ICO market to raise millions of dollars in capital with very little to show for it — some of which have even turned out to be outright scams that absconded with the funds collected during the process. We want these crowdsales to benefit the groups of people gathering together to build a common public good, but not the scammers. How do we achieve that?
Besides these issues, there are still lots of unanswered questions that need to be figured out before token sales become a viable form of funding.
If these questions interest you, you’re in luck — I’ll write about some of them in upcoming posts!
I’ve tried my best in this article to articulate my views on token sales and clear up some of the confusion around blockchain development in general.
Talking about cryptocurrency and blockchain development is like trying to take a picture of a running cheetah. The space is moving at breakneck speed, and any attempt to pin it down results in a blurry picture. Regardless, I still believe it’s important that we educate the broader community on cryptocurrency topics.
If you feel that I’ve made any overreaching assumptions in this walkthrough, please share commentary below! I’d love to talk more and learn from each other.
We need everyone’s input to figure out the right path towards a healthy and sustainable cryptoeconomic future.
Authored by Tsvetana Paraskova via OilPrice.com,
The leading commodities trader among global investment banks, Goldman Sachs, is assessing the future direction of its commodities business, following the worst start to a year in more than a decade, Bloomberg reported on Monday, citing people with knowledge of an informal internal review.
The fate of the commodities business was one of the items on the agenda of a recent board meeting in London in late June, Bloomberg’s sources said on the condition of anonymity.
Goldman Sachs has not reached any decision regarding the unit, and may not be overhauling the commodities division. According to one of Bloomberg’s sources, it is a common practice for a bank to review the performance of divisions that are not doing very well.
In its Q1 2017 results release, Goldman Sachs said that
“Net revenues in Fixed Income, Currency and Commodities Client Execution were $1.69 billion for the first quarter of 2017, essentially unchanged compared with the first quarter of 2016, reflecting significantly higher net revenues in mortgages and higher net revenues in interest rate products, offset by significantly lower net revenues in commodities and currencies and lower net revenues in credit products”.
Goldman did not quantify then the “significantly lower net revenues in commodities”, but according to one of the people who talked to Bloomberg, weakness in the commodities business persisted after the first quarter, and the commodities division’s start to the year has been the worst in more than a decade.
“Commodities has been and still is an important business for our clients and we will continue to invest in it to ensure we are best meeting their needs,” bank spokesman Michael DuVally told Bloomberg in an emailed statement.
According to U.S. Senate report “Wall Street Bank Involvement with Physical Commodities” from 2014, Goldman Sachs’s “commodity revenues were generally under $500 million from 1981 until 2000, and then began to climb, producing four years of relatively high revenues, from 2006 until 2009, before they once more began to decline.” The peak in 2009 was at US$3.4 billion, said the Senate report, quoting a Goldman presentation from 2013.
According to one of Bloomberg’s sources, Goldman’s commodities revenue for 2016 was less than US$1.1 billion.
In the past year, the Inuit community of Tuktoyaktuk in the Canadian Arctic, perched on the edge of the Beaufort Sea, has had to move five houses and a warehouse away from the shoreline because they were threatened by erosion, according to Mayor Darrel Nasogaluak.
"One was an emergency," Chukita Gruben, the community's 22-year-old former climate change coordinator, told me over the phone. "The other ones were about to fall."
Tuktoyaktuk, in the Northwest Territories—its population is around 900 people—is grappling with the effects of climate change. Permafrost melt is liquefying the ground under its buildings and roads. Sea levels are rising. The ice is melting earlier, and freezing later, meaning more open water and more storms. All this is contributing to the erosion that's eating away at the coast.
"Climate change is something the community's living with daily," Nasogaluak told me, and Tuk, as locals call it, is moving to adapt as quickly as it can.
Canadians, at least in the south, can sometimes feel smug about climate change. We read about places like the Maldives or even Miami being flooded by the rising seas, and it's scary—but for many of us, this feels far off from our own reality. Yet Canada is being reshaped by the same forces. In the next century, our coastline will look much different than it does today. The western Arctic, southeastern Atlantic Canada, and Vancouver are on the front lines.
Guoqi Han is a senior research scientist in physical oceanography with the federal Department of Fisheries and Oceans. He studies sea level rise and how it will impact Canada. I phoned him recently in Newfoundland, where he's based.
Sea levels are rising for two reasons, he explained: melting land ice, and the expansion of seawater as it warms up. But what many people don't realize, Han continued, is that water levels aren't rising in a uniform way—they impact different communities differently. "What really matters for local communities is the relative sea level," he said.
Here's how the picture looks. By the year 2100, according to Han, Charlottetown and Halifax could see 50-70 cm sea level rise, on average. Vancouver can expect to see 40 cm, and Tuktoyaktuk will see a 50 cm sea level rise. "That's not the possible upper limit," he emphasized. The reality could be worse.
Meanwhile, in some places, like the eastern Arctic, which is close to the Greenland ice sheet, the sea level could actually drop a bit in years to come, according to Han.
How to explain the variation in sea level rise around the country?
Han traces it back to 20,000 years ago, when parts of Canada were covered in a massive glacier. The ice eventually retreated, but—in an effect known as post-glacial rebound—the land is still very, very slowly bouncing back where all this weight was once pressing it down. Parts of central Canada, around Churchill, Manitoba and the Hudson Bay coast, are rising by 10 mm per year, Han said. Areas around the Atlantic coast, including Halifax and Charlottetown, are sinking—about 1 or 2 mm per year. That makes them even more vulnerable to rising seas, as well as the increased hurricanes and storms brought by climate change.
Vancouver is a little bit different. It's in an active earthquake zone. There, tectonic movement plays a bigger role, Han explained—Vancouver Island is actually slowly rising, but if a major earthquake does hit, the picture could change dramatically.
Canada has recognized that adapting against the pressures of climate change will be important in years to come. As part of its federal budget earlier this year, it earmarked $2 billion for a climate disaster mitigation fund. Still, there's been plenty of criticism that various levels of government aren't adequately prepared for the challenges ahead.
City planners in Atlantic Canada and on the West coast, as in Tuktoyaktuk, are working to buttress their cities against the effects of climate change. Vancouver is working under the assumption that it could see 50 cm of sea level rise by 2050, and 1 m by 2100, Angela Danyluk, a sustainability specialist with the city, told me. The city's adaptation strategy includes changes to building codes and a public education campaign, and officials haven't shied away from a discussion around whether infrastructure should even be removed from the coast.
It will change the way Vancouverites live day-to-day. "The lifestyle along the coast will change," Danyluk said. "It will be a new normal," one that includes annual preparations for winter storm surges and possible flooding, and maybe "reduced recreation opportunities" along the water, at least during some parts of the year. "The seawall will perhaps need to be shut down more often," she said. "Certain areas will have to be modified."
Tuktoyaktuk, of course, is already moving some of its buildings away from the coast. "We've lost a good month of the ice season," Nasogaluak told me. "Our oceans are freezing two weeks late, and breaking up two weeks earlier"—meaning there's an extra month of open water.
To protect its inhabitants, "we've put a line in the community where no one can build," he continued. "We can't protect them if they do build in that area." Nasogaluak said that imposing this restriction was very hard on people. But it just isn't safe anymore.
"We're a coastal community. We have coastal cabins and hunting areas," Nasogaluak continued. "People have had to relocate their camps where, for generations, they've hunted and fished." He said the community has applied for more government funding to help it withstand the pressures it's already facing—which are predicted to be more extreme in decades to come.
"We're a community of people who are very adaptive," he told me. "We're not panicking about climate change and sea level rise, but we need assistance. We're doing all we can."
Any resident in Florida can now challenge what kids learn in public schools, thanks to a new law that science education advocates worry will make it harder to teach evolution and climate change.
The legislation, which was signed by Gov. Rick Scott (R) this week and goes into effect Saturday, requires school boards to hire an “unbiased hearing officer” who will handle complaints about instructional materials, such as movies, textbooks and novels, that are used in local schools. Any parent or county resident can file a complaint, regardless of whether they have a student in the school system. If the hearing officer deems the challenge justified, he or she can require schools to remove the material in question.
The statute includes general guidelines about what counts as grounds for removal: belief that the material is “pornographic” or “is not suited to student needs and their ability to comprehend the material presented, or is inappropriate for the grade level and age group.”
Proponents of the new law say it makes the challenge process easier for parents and gives residents a greater say in their children's education. And state Rep. Byron Donalds (R-Naples), who sponsored the bill, told Nature in May that his intent wasn't to target any particular subject.
But Glenn Branch, deputy director of the National Council for Science Education, said that affidavits filed by supporters of the bill suggest that science instruction will be a focus of challenges. One affidavit from a Collier County resident complained that evolution and global warming were taught as “reality.” Another criticized her child's sixth-grade science curriculum, writing that “the two main theories on the origin of man are the theory of evolution and creationism,” and that her daughter had only been taught about evolution.
“It's just the candor with which the backers of the bill have been saying, 'Yeah, we’re going to go after evolution, we’re going to go after climate change,'" that has him worried, Branch said.
Based on the affidavits, it seems likely that the law will also be used to request the removal of library books that parents find objectionable.
The Florida statute is one of 13 measures proposed this year that Branch and his colleagues consider “anti-science.” In Idaho, the legislature rejected several sections of the state's new public school science standards related to climate change — the standards committee was asked to rewrite those sections and resubmit them for approval this fall. Alabama and Indiana both adopted nonbinding resolutions on teachers' “academic freedom,” which are generally understood as encouraging educators to “teach the controversy” around subjects like climate change.
“Whether it be evolution or the argument about global warming, we don’t want teachers to be afraid to converse about such things,” state Sen. Jeff Raatz (R-Centerville), a supporter of the resolution, told Frontline.
Similar measures in other states didn't make it into law, “but a number of them have advanced farther than we really expected,” Branch said. He called 2017 “a busy year” for this type of legislation.
In Florida, a group called Florida Citizens for Science urged people to keep an eye on challenges to school instructional materials in the coming year.
“At this point the fight is at the local level,” the group's communication director, Brandon Haught, wrote in a blog post. “If you’re not there and willing to stand up for sound science education, then we’re done.”
You probably know by now that Conor McGregor is going to make a lot of money for the feat of getting his ass kicked by Floyd Mayweather in a boxing match later this summer. Good for him, I guess. But if you harbored any doubts about his eventual fate, just watch this training video...which I think is meant to be intimidating, and is not a parody:
Seriously, that entire video is a series of erotic dance moves, followed by McGregor almost losing a sparring match to a heavy bag. For a contrast, watch Mayweather:
Floyd might actually kill him. Is it possible for the ref to stop the fight before it even begins?
On to the rest of the superlatives!
The Best Time To Not Tell The Truth, Or Better Yet, Not Speak At All: John McEnroe
For those who missed it, John McEnroe had the following exchange with an NPR reporter about Serena Williams in a story that was published earlier this week:
Garcia-Navarro: We're talking about male players but there is of course wonderful female players. Let's talk about Serena Williams. You say she is the best female player in the world in the book.
McEnroe: Best female player ever — no question.
Garcia-Navarro: Some wouldn't qualify it, some would say she's the best player in the world. Why qualify it?
McEnroe: Oh! Uh, she's not, you mean, the best player in the world, period?
Garcia-Navarro: Yeah, the best tennis player in the world. You know, why say female player?
McEnroe: Well because if she was in, if she played the men's circuit she'd be like 700 in the world.
A lot of people threw a fit about this, and McEnroe came out looking iffy—at best. It also invoked a predictable debate on the Internet and sports talk radio, and, as these things do, opinions broke down along polarized lines. Some said it was sexist to diminish Serena and all female athletes by bringing up the comparison to men, and some said he was just being honest. Serena told him (respectfully, somehow) to shut up.
For what it's worth, I have no idea where Serena would rank if she played on the men's tour. I don't care. And I don't think McEnroe was being outwardly malicious, nor do I think he would have made the remark if he wasn't led in that direction by the interviewer. Still, I want to make a point that might be helpful to public figures in the future:
You don't have to say more than is necessary, even if you're thinking it. Talking is voluntary. You control it!
When Garcia-Navarro asked him why he would qualify his statement, he could've said something like: "In terms of achievement, there's no qualification—she's the best of all-time." Which would be true! And then, if pressed, he could just point out that there are physical differences and leave it at that. No reason to quantify it, and I do think that by placing her 700th in the world, whether it's true or not, he inadvertently belittled her achievements, and it makes total sense why it might leave certain people upset.
It never had to happen, Johnny Mac—wise up and learn some diplomacy! I know it's a lot to ask for a guy who has inspired a thousand YouTube videos with the word "tantrum" in the title, but you're almost 60, and it's time to enter the "wise elder statesman" part of your career. Plus, as you continue to lose flexibility, it will be harder and harder to put your foot in your mouth, so you might as well stop trying now.
Dumbest Non-Story of the Week: The NBA and LeBron James
Look at the headline of this story on the ESPN front page: "Source: LeBron not actively recruiting for Cavs."
The story itself is as pointless as you might imagine—per some "league source" who clearly rivals Deep Throat for journalistic impact, LeBron is just kinda chilling out on the whole recruiting circuit. Instead, he's going to a wedding, and treating the offseason like an actual offseason.
Now, this might be a significant story if "recruitment" were an actual part of James' job. It's not, so it's really dumb to act like he's somehow derelict in his duties. Also, recruitment is completely unnecessary. If you're a basketball player who is considering signing with the Cleveland Cavaliers, here are the only three things you need to know:
1. You will get to be teammates with the Best Player on Earth.
2. You will make it at least as far as the NBA finals from now until whenever the aforementioned Best Player on Earth decides to retire, because he's awesome and the rest of the conference is terrible (and only getting worse).
3. You have to live in Cleveland, which isn't great, but you'll have so much money that it won't be a big deal.
So if you're Zach Randolph, or someone like him, just sign with the Cavs. Trust me, it will be awesome, and you shouldn't need LeBron to play skee-ball with you at a Dave & Busters to figure that out.
Best Job of Hitting a Guy in the Butt with a Tennis Ball: Gael Monfils
There were so many nominees in this category, and narrowing them down to just one winner was probably the hardest thing I've ever had to do, but in the end, I have to give this one to Gael Monfils:
This Week's Big Loser of the Blame Game: Miguel Montero
Cubs catcher Miguel Montero was 0-for-31 in throwing out runners this season, and when reporters asked him about his futility, he decided to throw his pitchers under the bus:
"It really sucked, because the stolen bases go on me. But when you really look at it, the pitcher doesn't give me any time, so yeah, 'Miggy can't throw anyone out,' but my pitchers don't hold anyone on...the numbers always go to the catcher, so I'm the bad guy there," Montero said. "It really sucks. Have to take full responsibility, but on the other hand, I would like a little help."
On one hand, he's probably a little bit right. On the other, 0-31 is terrible, and what's also terrible is publicly dumping on your teammates when times are rough. Nobody appreciates that, least of all management, and it didn't take long before the Cubs brought the hammer down:
The lesson: When you point a finger at someone, there are three more pointing back at you, and one of those fingers is probably named Theo Epstein, and is about to send you to Triple-A.
While Bob Woodward is more stoic in his public discussion of the predicament this nation finds itself in (lambasting the 'fake news' media rather than directing his ire at the body politic), his partner in un-crime, legendary Watergate reporter Carl Bernstein, called the Trump administration a "malignant presidency" on Saturday and suggested that the wrongdoings committed by the White House were unprecedented.
Journalism legend Carl Bernstein on President Trump: "We have never been in a malignant presidency like this before" https://t.co/847szAUDhk
— CNN Politics (@CNNPolitics) July 1, 2017
The Hill reports that Bernstein, speaking on CNN, warned that the Trump administration is "not functioning" and appeared to hint at a 'soft coup' amid the nation's deep state...
"We are in the midst of a malignant presidency," Bernstein said. "That malignancy is known to the military leaders of the country, it's known to the Republican leadership in Congress who recognize it, and it's known to the intelligence community."
"The presidency of Donald Trump is not functioning," he continued. "It's really not functioning because the character and capabilities of this president are called into grave question in a way that those that know him are raising serious concerns about."
While Bob Woodward warned the "smug" media against "hyperventilating" over Trump, Bernstein suggested Trump was the "greatest journalistic challenge of the modern era."
"To report on a malignant presidency, what it means, and where it's going," he said. "This president is not in control of the presidency in a way that it is functioning."
"That has got our leaders worried, they are worried about his character, they are worried about his temperament," Bernstein said.
"We are in foreign territory. We have never been in a malignant presidency like this before."
As a reminder, Woodward previously warned that it’s not in the interest of either the Trump White House or the media to war with each other.
"I think everyone has accelerated this work. The other question to ask, is there any justification for Trump and people like — in his White House responding this way? And the only justification I can think of, which really isn’t a justification, but it accounts for emotional spasm of, my God, this is enemy of the people, I know that reporters have talked to people in the Trump house, — Trump White House about very sensitive intelligence operations, that we find out about in the press. And I think Trump is horrified that this is out there. And these are not necessarily things that are going to be published, but Trump is a newcomer saying, my God how do reporters know about these things? And so it’s — we’ve got to stop it.”
“[I]t’s not in our interest, the media’s interest to have a war with the Trump White House. It’s not in Trump’s interest to have this war.”
Sadly, it appears it's too late to get this toothpaste back in the tube (for both sides).
Despite the massive venture investments going into healthcare AI applications, there's little evidence of hospitals using machine learning in real-world settings. We decided this topic is worth covering in depth, since any changes to the healthcare system directly affect business leaders in multiple ways, from employee insurance coverage to hospital administration policies.
In 2016, national health expenditures were estimated at $3.4 trillion, with a projected increase from 17.8 to 19.9 percent of GDP between 2015 and 2025. Industry analysts estimate that the AI health market will reach $6.6 billion by 2021 and could generate $150 billion in annual savings for the U.S. healthcare economy by 2026. However, no sources have taken a comprehensive look at machine learning applications at America's leading hospitals.
In this article, we set out to answer the questions that business leaders are asking today.
This article aims to present a succinct picture of the implementation of machine learning at the five leading hospitals in the U.S., based on the 2016-2017 U.S. News and World Report Best Hospitals Honor Roll rankings. (While U.S. News is a respected industry source, we acknowledge that the Honor Roll ranking methodology may not capture the complexities of every hospital; we're simply using the ranking as a means of finding a representative set of high-performing hospitals.)
Through facts and figures we aim to provide pertinent insights for business leaders and professionals interested in how these top five US hospitals are being impacted by AI.
Before presenting the applications at each of the top five hospitals, we’ll take a look at some common themes that emerged from our research in this sector.
Judging by the current machine learning initiatives of the top five US hospitals, the most popular hospital AI applications appear to be:
In the full article below, we'll explore the AI applications of each hospital individually. It's important to note that most of the applications of AI at major hospitals are relatively new, and few have reported concrete results on the improvements or efficiencies these technologies have delivered. We did our best to exclude AI use-cases that seemed more like PR stunts than genuine applications and initiatives.
In any case, you'll see in the article below that we're very clear about which applications have traction and which have no results to speak of thus far. We'll begin with the #1-ranked hospital on the Best Hospitals Honor Roll, the Mayo Clinic.
In January 2017, Mayo Clinic's Center for Individualized Medicine teamed up with Tempus, a health tech startup focused on developing personalized cancer care using a machine learning platform. The partnership involves Tempus conducting "molecular sequencing and analysis for 1,000 Mayo Clinic patients participating in studies relating to immunotherapy" for a number of cancer types including "lung cancer, melanoma, bladder cancer, breast cancer and lymphoma."
While currently in the R&D phase, the initial goal is to use the results of these analyses to help inform more customized treatment options for Mayo's cancer patients. Mayo joins a small consortium of healthcare organizations in partnership with Tempus, including the University of Michigan, the University of Pennsylvania and Rush University Medical Center.
“The holy grail that we’re looking for, and that Tempus is actively trying to build, is a library of data big enough that these patterns become a therapeutic, meaning you can start to say, ‘People that have this particular mutation shouldn’t take this drug, people that have this particular mutation should take this drug’” -Eric Lefkofsky, Tempus Co-Founder and CEO
Tapping into the estimated $13.8 billion DNA sequencing product market, the startup apparently follows two compensation models depending on client type: Tempus charges hospital systems directly for its services, while for individual patients the costs are billed to the insurance provider. Tempus CEO Eric Lefkofsky is also a co-founder of eCommerce giant Groupon and a handful of tech companies with analytics software leanings, including Uptake Technologies and Mediaocean.
According to the CDC, cancer is rivaled only by heart disease as the leading cause of death in the U.S. In March 2017, Mayo Clinic, in conjunction with medical device maker Omron Healthcare, made a $30 million Series D investment in heart health startup AliveCor.
Kardia Pro, designed by AliveCor, is an AI-powered platform for clinicians "to monitor patients for the early detection of atrial fibrillation, the most common cardiac arrhythmia that leads to a five times greater risk of stroke." Kardia Mobile, AliveCor's flagship product, is a mobile-enabled EKG. Results of the Kardia Pro investment have yet to be reported.
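AliveCor has not published the details of its detection algorithms, but a common screening signal for atrial fibrillation is beat-to-beat (R-R) irregularity in the EKG. Purely as an illustration of that idea, and not as AliveCor's method, here's a minimal sketch that flags a recording when R-R variability exceeds a placeholder threshold:

```python
# Illustrative sketch only -- NOT AliveCor's algorithm. A common screening
# heuristic for atrial fibrillation is high beat-to-beat (R-R) irregularity.
import statistics

def rr_irregularity_score(r_peak_times_s):
    """Coefficient of variation of successive R-R intervals (seconds)."""
    rr = [b - a for a, b in zip(r_peak_times_s, r_peak_times_s[1:])]
    if len(rr) < 2:
        raise ValueError("need at least three R peaks")
    return statistics.stdev(rr) / statistics.mean(rr)

def flag_possible_afib(r_peak_times_s, threshold=0.15):
    """Flag a recording for clinician review if irregularity is high.
    The 0.15 threshold is a placeholder, not a clinically validated cutoff."""
    return rr_irregularity_score(r_peak_times_s) > threshold

# Example: an irregular rhythm is flagged, a steady one is not.
irregular = [0.0, 0.7, 1.9, 2.4, 3.8, 4.3, 5.7]
steady = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0, 4.8]
print(flag_possible_afib(irregular))  # True
print(flag_possible_afib(steady))     # False
```

In a real product, a heuristic like this would be only one input among many and would require clinical validation before use.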
In September 2016, Microsoft announced a collaboration with Cleveland Clinic to help the medical center “identify potential at-risk patients under ICU care.” Researchers used Cortana, Microsoft’s AI digital assistant, to tap into predictive and advanced analytics.
Used by 126 million Windows 10 users each month, Cortana is part of Microsoft’s Intelligent Cloud segment which increased by 6 percent or $1.3 billion in revenue according to the company’s 2016 annual report.
Cortana is integrated into Cleveland Clinic's eHospital system, a type of command center first launched in 2014 that currently monitors "100 beds in six ICUs" from 7pm to 7am. While improved patient outcomes have been reported by William Morris, MD, Associate CIO, specific improvement measures have not been released.
The Microsoft-Cleveland Clinic partnership is focused on identifying patients at high risk for cardiac arrest. Vasopressors are a medication administered to patients in the event of a cardiac arrest. While part of a “pulseless sudden cardiac arrest management protocol,” vasopressors also raise blood pressure. Researchers aim to predict whether or not a patient will require vasopressors.
Data collected from monitored ICUs is stored in Microsoft’s Azure SQL Database, a cloud-based database designed for app developers. Data collection points such as patient vitals and lab data are also fed into the system. A computer model is built from the data that integrates machine learning for predictive analysis.
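Neither Microsoft nor Cleveland Clinic has disclosed the model itself, so purely as a sketch of what a vitals-to-vasopressor prediction pipeline could look like, here is a toy example that trains a classifier on synthetic ICU features; the feature names, data, and model choice are all invented for illustration:

```python
# Minimal sketch of a vitals -> "will need vasopressors" classifier.
# Synthetic data and invented features; not the Cleveland Clinic model.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Invented ICU features: heart rate, mean arterial pressure, lactate, SpO2.
X = np.column_stack([
    rng.normal(90, 15, n),    # heart rate (bpm)
    rng.normal(75, 12, n),    # mean arterial pressure (mmHg)
    rng.gamma(2.0, 1.0, n),   # lactate (mmol/L)
    rng.normal(96, 3, n),     # SpO2 (%)
])

# Synthetic label: low MAP and high lactate raise the "needs vasopressor" odds.
logits = 0.08 * (65 - X[:, 1]) + 0.9 * (X[:, 2] - 2.0)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

The real system presumably works on streaming time-series data and far richer features; the point here is simply the shape of the pipeline: collect vitals and labs, label historical outcomes, train, then score incoming patients.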
Massachusetts General Hospital is in the early stages of its AI strategy. In April 2016, NVIDIA announced its affiliation with the Massachusetts General Hospital Clinical Data Science Center as a "founding technology partner." The Center aims to serve as a hub for AI applications in healthcare for the "detection, diagnosis, treatment and management of diseases."
Officially presented at the 2016 GPU Technology Conference, NVIDIA DGX-1 is described by the company as a “deep learning supercomputer” and was installed at Mass General (readers unfamiliar with GPU technology may be interested in our NVIDIA executive interview titled “What is a GPU?“). The NVIDIA DGX-1 reportedly costs $129,000.
With a hospital database comprising "10 billion medical images," the server will initially be trained on this data for applications in radiology and pathology. The Center aims to later expand to electronic health records (EHRs) and genomics. If NVIDIA DGX-1 delivers on its promises, it could mitigate some of the challenges currently facing the field:
"If we can somehow seamlessly capture the relevant data in a highly structured, thorough, repetitive, granular method, we remove that burden from the physician. The physician is happier, we save the patient money and we get the kind of data we need to do the game-changing AI work." -Will Jack, Co-founder and CEO of Remedy Health
In March 2016, Johns Hopkins Hospital announced the launch of a hospital command center that uses predictive analytics to support a more efficient operational flow. The hospital teamed up with GE Healthcare Partners to design the Judy Reitz Capacity Command Center which receives “500 messages per minute” and integrates data from “14 different Johns Hopkins IT systems” across 22 high-resolution, touch-screen enabled computer monitors.
A team of 24 command center staff is able to identify and mitigate risk, "prioritize activity for the benefit of all patients, and trigger interventions to accelerate patient flow." Since the launch of the command center, Johns Hopkins reports a 60 percent improvement in the ability to admit patients "with complex medical conditions" from the surrounding region and the country at large.
The hospital also reports faster ambulance dispatches, 30 percent faster bed assignments in the emergency department, and a 21 percent increase in patient discharges before noon among other improvements.
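GE and Johns Hopkins haven't published the command center's internal logic, but one small piece of such a system, escalating high-acuity bed requests that have waited too long, can be sketched as follows; the schema, acuity scale, and 30-minute threshold are all hypothetical:

```python
# Toy sketch of one command-center rule: escalate emergency-department
# bed requests that have waited too long. Invented schema and thresholds;
# the real Johns Hopkins / GE system is far more sophisticated.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class BedRequest:
    patient_id: str
    source_system: str        # e.g. "ED", "OR", "transfer-center"
    requested_at: datetime
    acuity: int               # 1 (most urgent) .. 5

def needs_escalation(req: BedRequest, now: datetime,
                     max_wait=timedelta(minutes=30)) -> bool:
    """Escalate high-acuity requests that exceed the target wait time."""
    return req.acuity <= 2 and (now - req.requested_at) > max_wait

now = datetime(2017, 4, 1, 12, 0)
queue = [
    BedRequest("A12", "ED", now - timedelta(minutes=45), acuity=1),
    BedRequest("B07", "transfer-center", now - timedelta(minutes=10), acuity=2),
]
for req in queue:
    if needs_escalation(req, now):
        print(f"escalate bed assignment for {req.patient_id} ({req.source_system})")
```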
Johns Hopkins leaders convened in April 2017 for a two-day discussion on how to leverage big data and AI in the area of Precision Medicine. Industry analysts estimate the global Precision Medicine market will reach $173 billion by 2024 (see our full article on AI applications in medicine for more use-cases in medicine and pharma).
While specific details have not been released, a talk on "how artificial intelligence and deep learning are informing patient diagnosis and management" was presented by three speakers, including Shahram Ebadollahi, a VP for the IBM Watson Health Group, and Suchi Saria, PhD, Assistant Professor of Computer Science.
Saria’s research on machine learning applications to improve patient diagnoses and outcomes was recently presented at the 11th Annual Machine Learning Symposium at the New York Academy of Sciences.
In March 2017 in Washington, D.C., UCLA researchers Dr. Edward Lee and Dr. Kevin Seals presented the research behind the design of their Virtual Interventional Radiologist (VIR) at the Society of Interventional Radiology’s annual conference. Essentially a chatbot, the VIR “automatically communicates with referring clinicians and quickly provides evidence-based answers to frequently asked questions.”
Currently in testing mode, this first VIR prototype is being used by a small team of UCLA health professionals which includes “hospitalists, radiation oncologists and interventional radiologists” (readers with a deeper interest in cancer treatments may want to read our full article about deep learning applications for oncology). The AI-driven application provides the referring physician with the ability to communicate information to the patient such as an overview of an interventional radiology treatment or next steps in a treatment plan, all in real-time.
VIR was built on a foundation of over 2,000 example data points designed to mirror questions that commonly come up during a consultation with an interventional radiologist. Responses are not limited to text and may include "websites, infographics, and custom programs."
The research team integrated VIR with natural language processing ability using the IBM Watson AI system. In the tradition of customer service chatbots across industries, if VIR cannot provide an adequate response to a particular inquiry the chatbot provides the referring clinician with contact information for a human interventional radiologist.
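The UCLA team built VIR on IBM Watson's natural language processing; the retrieve-or-escalate pattern itself is simple to sketch. The toy example below substitutes naive keyword matching for Watson, and the questions, answers, and fallback text are invented:

```python
# Minimal retrieve-or-fall-back FAQ bot. Keyword overlap stands in for the
# IBM Watson NLP the UCLA team actually used; questions/answers are invented.
FAQ = {
    "what is an ivc filter": "An IVC filter is a small device placed in the "
                             "inferior vena cava to catch blood clots.",
    "how should a patient prepare for an angiogram": "Typical preparation "
                             "includes fasting and a review of current medications.",
}
FALLBACK = ("I don't have an answer for that. Please contact the on-call "
            "interventional radiologist at the number in the department directory.")

def answer(question: str, min_overlap: int = 2) -> str:
    """Return the best-matching FAQ answer, or escalate to a human contact."""
    words = set(question.lower().split())
    best_key, best_score = None, 0
    for key in FAQ:
        score = len(words & set(key.split()))
        if score > best_score:
            best_key, best_score = key, score
    return FAQ[best_key] if best_score >= min_overlap else FALLBACK

print(answer("How does a patient prepare for an angiogram?"))
print(answer("Can you review this MRI?"))  # falls back to a human contact
```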
With increased use, the researchers aim to expand the functionality of the application to serve "general physicians interfacing with other specialists, such as cardiologists and neurosurgeons."
In March 2016, UCLA researchers published a study in Nature's Scientific Reports combining a specialized microscope with a deep learning computer program "to identify cancer cells with over 95 percent accuracy."
Photo (see Photonic time stretch microscope): http://ift.tt/1TRHOFQ
The photonic time stretch microscope, invented by Bahram Jalali, the research team's lead scientist, produces high-resolution images and is capable of analyzing 36 million images per second. Deep learning is then used to "distinguish cancer cells from healthy white blood cells."
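The UCLA group extracted biophysical features from the time-stretch images and fed them to a deep neural network; as a rough illustration of that final classification step (not their pipeline), here is a toy classifier trained on invented per-cell features such as diameter and optical density:

```python
# Illustrative sketch only: a classifier over invented per-cell features.
# The UCLA study used features derived from time-stretch images and a deep
# neural network; this synthetic example is not their pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 3000

# Invented features: cell diameter (um), optical density, shape eccentricity.
healthy = np.column_stack([rng.normal(8, 1.0, n), rng.normal(1.0, 0.1, n),
                           rng.normal(0.3, 0.1, n)])
cancer = np.column_stack([rng.normal(12, 1.5, n), rng.normal(1.4, 0.15, n),
                          rng.normal(0.5, 0.1, n)])
X = np.vstack([healthy, cancer])
y = np.array([0] * n + [1] * n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=1)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```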
Blood-based diagnostics are a growing sector and an increasingly competitive space, as discussed in a recent TechEmergence interview with the founder of a leading firm investing in early-stage tech startups:
“…Looking at DNA, RNA, proteins, all kinds of biomarker information to diagnose someone as early as possible is definitely a very active area…for example if you look at Grail, the very large spinout from Illumina, they’re very well-funded and they’re trying to use DNA and other information from the blood to be able to detect cancer early.” – Shelley Zhuang, Founder and Managing Partner, Eleven Two Capital
It's important to note that healthcare machine learning applications (unlike applications in, say, detecting credit card fraud or optimizing marketing campaigns) face unique constraints. Treating patients is a more delicate matter than testing an eCommerce up-sell, and given regulatory compliance and a multitude of complex stakeholder relationships (doctors to use the technology, hospital execs to buy it, patients to hopefully benefit from it), we should be somewhat sympathetic toward these top hospitals for not yet having tangible results from AI applications in such a sensitive and new field.
Our interview with health-tech investor Dr. Steve Gullans covered the unique challenges of hospital AI applications in greater depth. Steve mentioned the overt fear that many specialist physicians feel around AI tools, and the other psychological factors that will likely make hospital adoption slow.
When asked how hospitals and healthcare facilities might get around these barriers (assuming the technology will in fact better the lives of patients), he expressed his opinion on where adoption opportunities may exist:
“It is tough, but there’s always a few beachheads that are going to pay off; there are some applications right now where physicians don’t enjoy a particular kind of call or one where having some assistance can actually be a big benefit to everyone involved…I think what you’re going to see is very specific populations within a particular setting, such as calling a stroke in the ER as bleeding or non-bleeding, where there’s a life and death decision that’s very binary…” – Dr. Steve Gullans, Excel Venture Management
It's also important to note that we should remain skeptical of technology applications until quantifiable results can be verified. As in nearly all other AI-infused industries, machine learning in healthcare is producing plenty of "technology signaling" (the hyped-up touting of "AI" for the sake of garnering attention and press rather than for improving an organization's results).
It's mutually advantageous for an AI vendor like NVIDIA or Microsoft to supply a "new and super-fancy" AI technology to a top hospital and to grant the hospital the title of "founding technology partner."
These kinds of events are nearly guaranteed to get press and (probably) reflect favorably upon both parties – whether or not the AI application ever drives results for either the hospital (efficiencies) or patients (better health outcomes).
We certainly didn’t compose this article to insult the efforts of any of the hospitals or vendors involved, but we as analysts must remain skeptical until results can be determined. Our aim is to inform business readers (like yourself) of the applications and implications of AI, and we always prefer projects with recorded results rather than “initiatives.”
That being said, it’s important for business leaders to understand the common trends of such developments to get a sense of the “pulse” of an industry, and we hope to have done just that in this article.
One of the reasons we insist that vendor companies list a client company and quantifiable result in our TechEmergence AI case studies is because – as with any emerging technology – AI is often used as a signal for the “cutting edge,” a tool for hype and not function.
We're of the belief that some of the hospital AI applications highlighted in this article will in fact make their way to real and ubiquitous use (particularly those highlighted in our "insights upfront" section at the beginning of this article). Just when fruitful applications will become commonplace, only time will tell.
1. Some core features of Ethereum.
2. Watching and lamenting (?) the death of the NYC diner.
3. Top ten stock football (soccer) images?
5. When will the drone wars escalate?
6. Matt Klein on Harberger taxation and they assured us the proposal was not satire.
The post Sunday assorted links appeared first on Marginal REVOLUTION.
Blockchain startup Block.one announced this morning that it has raised $185 million in just five days of selling its EOS cryptocurrency token. That sum breaks the record Bancor set just a couple of weeks ago with its ~$150 million fundraise.
Block.one’s goal is to bring blockchain to businesses. It claims its platform offers a level of scalability unprecedented in the blockchain world; it would be able to process millions of transactions per second with no transaction fees, according to CoinDesk’s write-up of the company a few days ago.
Still, the CoinDesk story points out, the company has had little to show for its claims so far, something that has become characteristic for many startups launching initial coin offerings (ICOs).
This record-setting ICO comes amid a fresh surge in interest in blockchain technology and startups.
The full text of the company’s press release follows below.
Block.one, the developer of EOS.IO software, a new blockchain operating system designed to support commercial-scale decentralized applications, today has successfully received 651,902 ether (“ETH”), which is approximately US$185 million, in the first five days of its 341-day long token distribution. In exchange, 200 million EOS ERC-20 compatible tokens (“EOS Tokens”) were distributed to purchasers (representing 20 percent of the total one billion EOS Tokens being distributed).
The distribution of EOS Tokens began on June 26, 2017. The distribution uses a ground-breaking token participation model by creating what is intended to be the fairest token distribution project launched on Ethereum to date. This elongated timeframe eliminates the quick frenzy usually surrounding short token sales, and allows the community ample time to learn about the EOS.IO software being developed by block.one and participate in the token distribution if they wish.
The EOS Token distribution also approximates an auction where for every period, everyone gets the same price. At the end of a period, the respective set number of EOS Tokens for that period will be distributed pro rata amongst all authorized purchasers, based on the total ETH contributed during that period.
“We felt an approximately year-long token distribution was the best method to ensure people receive fair market value for EOS Tokens,” said Brendan Blumer, CEO of block.one. “We anticipate that strong interest will continue throughout the year as the community continues to learn about the EOS.IO software and the benefits it can bring to their business.”
Seven hundred million additional EOS Tokens (representing 70 percent of the total EOS Tokens being distributed) have been split evenly into 350 consecutive 23-hour periods of 2 million tokens each, and will be distributed at the close of each period. The remaining 100 million EOS Tokens (representing 10 percent of the total EOS Tokens being distributed) have been reserved for block.one as founder’s tokens pursuant to the feedback received from the community to ensure that block.one has aligned interests with those participating in the EOS Token distribution. If a blockchain adopting the EOS.IO software is launched, these founder’s tokens will be locked and released over a period of 10 years.
Many corporations are looking for a blockchain that provides the speed and performance required in order to run commercial-grade businesses. The EOS.IO software introduces asynchronous communication and parallel processing to support hundreds of thousands of transactions per second. The software on which EOS's architecture is based establishes an operating system-like construct upon which applications can be built and eliminates the requirement for users to pay for every transaction. The software is intended to allow developers to build their own high performance applications on the blockchain and deploy their own monetization strategies without requiring users to necessarily pay to use those applications.
block.one intends for the EOS.IO software to support distributed applications that have the same look and feel as existing web-based applications, but with all of the benefits of the blockchain – namely transparency, security, process integrity, speed and lower transaction costs.
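The pro-rata auction mechanics described in the press release are straightforward to illustrate. Below is a minimal sketch, with entirely hypothetical contribution amounts, of how one 23-hour period's fixed allotment of 2,000,000 EOS Tokens would be split among purchasers in proportion to the ETH each contributed; this is our illustration, not block.one's actual code.

```python
# Worked sketch of the pro-rata distribution described in the press release:
# in each period, a fixed block of EOS tokens is split among contributors in
# proportion to the ETH they sent in. Numbers below are hypothetical.
def distribute_period(tokens_for_period, contributions_eth):
    """Return each purchaser's token allocation and the implied ETH price."""
    total_eth = sum(contributions_eth.values())
    price_per_token = total_eth / tokens_for_period  # same price for everyone
    allocations = {buyer: tokens_for_period * eth / total_eth
                   for buyer, eth in contributions_eth.items()}
    return allocations, price_per_token

# A hypothetical 23-hour period with 2,000,000 EOS on offer:
allocations, price = distribute_period(
    2_000_000, {"alice": 300.0, "bob": 100.0, "carol": 600.0})
print(allocations)  # {'alice': 600000.0, 'bob': 200000.0, 'carol': 1200000.0}
print(price)        # 0.0005 ETH per EOS token
```

Because everyone in a period pays the same implied price, the mechanism removes the race to get in early within a period, which is the "fairness" property the release emphasizes.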