Why should you join a startup if the probabilities are not in your favor?
Startup L. Jackson writes a great note as always.
Mr. Mehta does an excellent summary.
And they capture the essence of the theory: joining a startup is a lifestyle choice and an opportunity to short-circuit the career-advancement ladder. Or a way to learn new skills.
I joined Zynga because I wanted to work on hyper-scale infrastructure. And I got that opportunity in spades.
Without going into too much detail: I led a web property, built out a 200-person dev-ops function, and ran a team that delivered many of the products used to operate our games. Under my watch we had less than 30 minutes of planned downtime and delivered over 4 9's of infrastructure availability. I also got to build out a 3rd-party API platform and kick-started an effort to create a gaming-optimized mobile programming language.
And I met a whole bunch of amazing people who are friends.
You don’t get that kind of crazy experience in 4 years at a large company. And I made the choice to go to Zynga to learn and I got that.
And while this might make the backers of these deals happy that the employees got a first-class education, it doesn't change the reality that there is something fishy about those deals.
When I was living in Greece, Dell computers were these awesome magical things that you could never own.
When I came to the USA in 1992 to study at Brown University, I couldn't wait to buy a Dell. That first Dell never stopped working.
Apparently neither has this one.
And so I know it's just a side effect of some social media person filling out a form and cross-checking VPs at VMware against LinkedIn profiles, and I know it's nothing personal… And I still think it's cool!
I wrote about this in 2008 on my now defunct corporate blog at NetApp. It’s fun to be working at a company that can actually create the IOPS tier.
Flash has once again thrown into stark relief the absurd classification of storage into tiers.
Talk to a storage vendor and Tier 1 is their most expensive stuff. Talk to a storage architect and Tier 1 is their most critical applications. If you're lucky, there is some overlap.
Then we have Flash. Is it Tier 0? Does Flash make Disk Tier 5? What is the role of Flash and Disk? Is Disk the new Tape? So do we need to have Tier -1 for storage that is faster than Flash?
Then there is the whole notion that disk storage is secondary storage. Secondary to what?
I never really did get all of those classifications of storage into tiers. I tend to think of storage in terms of how it is used.
So instead let me propose a new model for storage tiers based on the ratio of an application's CPU and memory to its required IOPS and its capacity needs: the ratio CPU:memory:IOPS:capacity.
Based on that ratio, there are three storage tiers:
- Captive IOPS, where IOPS are all dedicated to a single application. In this deployment the ratio is 1:1:1:1. Add more CPU and Memory and you add more IOPS and Capacity. Because of the nature of the application and how many IOPS it consumes, there is nothing left over for another application.
- Shared IOPS, where IOPS are shared across a collection of applications. In this deployment the ratio is M:N:1:1. As you add more CPU and memory, the number of IOPS increases, but not at the same rate. So you can share the IOPS across a number of applications rather than dedicating them to a single application.
- Capacity Efficient, where the number of IOPS is dwarfed by the capacity requirements. In this deployment the ratio is M:N:1:Q, where, as M and N increase, Q increases but IOPS do not. A good example is a backup server: as more data gets backed up you need more capacity, but you don't actually need more IOPS. Another good example is a home directory, where capacity needs increase but actual IOPS do not.
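As an illustrative sketch, the three tiers can be expressed in code. The ratios in the comments follow the model above; the `classify` function and its thresholds are my own hypothetical reading, not from the original.

```rust
// Hypothetical encoding of the three proposed tiers, keyed off how fast
// IOPS and capacity grow relative to CPU and memory (1.0 = same rate).
#[derive(Debug, PartialEq)]
enum StorageTier {
    CaptiveIops,       // 1:1:1:1, IOPS scale in lockstep with CPU/memory
    SharedIops,        // M:N:1:1, IOPS grow slower than CPU/memory
    CapacityEfficient, // M:N:1:Q, capacity keeps growing while IOPS stay flat
}

// Growth of IOPS and capacity relative to CPU/memory growth.
fn classify(iops_growth: f64, capacity_growth: f64) -> StorageTier {
    if iops_growth >= 1.0 {
        StorageTier::CaptiveIops // every IOP is consumed by one application
    } else if capacity_growth > iops_growth {
        StorageTier::CapacityEfficient // backup servers, home directories
    } else {
        StorageTier::SharedIops // pool the IOPS across many applications
    }
}

fn main() {
    assert_eq!(classify(1.0, 1.0), StorageTier::CaptiveIops);
    assert_eq!(classify(0.2, 0.2), StorageTier::SharedIops);
    assert_eq!(classify(0.1, 0.9), StorageTier::CapacityEfficient);
}
```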
Next, I’ll explore the implications of these three tiers.
One of the more intriguing questions I keep asking myself is whether the myopic fixation on bubble valuations is reasonable.
At the end of the day, what I have convinced myself of is that:
- investors are not investing at ridiculous valuations,
- founders and very early employees are cashing out, and yes
- some people are going to lose money
- it’s not like 2001, when retail investors buying on margin were piling into theglobe.com
Then who cares? The reason I do is that:
- I like to understand things
- In any new financial instrument someone is assuming more risk and someone less
And I am beginning to conclude that employees are assuming more risk.
In the valley, we’ve created a system where you have a relatively low base salary and a very high variable income. The goal of the system is to encourage employees to keep playing the lottery with their time in the hopes that they strike it rich.
The problem with this system, from the perspective of the owners of the company, is that the more equity you hand over to the employees, the less equity the owners have.
If a company has 1000 shares, the goal of the owners is to maximize the number they own and minimize the number everyone else has. The problem is that the way employees think about their compensation forces the company to keep granting shares to new employees, and thus the percentage of the company owned by the owners shrinks over time. The company has to keep issuing new shares because at some point it has no more shares to give out. To retain his share of the company, an owner has to keep buying shares, and that increases his risk as more and more of his money is concentrated in one company. And one important class of owners, the founders, typically doesn’t have the cash necessary to preserve their share of the company.
Therefore, the goal of the owner is to minimize the number of shares issued to employees. One approach to solving that conundrum is to increase the per-share value. The way you increase per-share value is to have investors buy into the company at a high valuation. The problem is that investors don’t want to assume that kind of risk for their investment. And so we have liquidation preferences and the like, which allow the valuation to be set high but the effective purchase price to be set low.
To keep dilution to a minimum, the founders are able to drive the value of the stock up with investor money and allow the investors to not assume the risk of the high valuation…
The risk of high per-share price is transferred entirely onto the employees for the benefit of the founder and early investors.
Let’s try that again…
Any half-decent engineer will evaluate their salary like this:
Total Income = Cash + Equity
And stock option equity will be valued like this:
Equity = % of company * (Expected Value at time of Cash Out – Current Value)
Any RSU equity will be valued like this:
Equity = #RSU * Current Value + #RSU * Expected Value of the company when you cash out.
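Written out as code with hypothetical inputs (the two functions mirror the formulas above; every figure below is invented for illustration):

```rust
// Option-style equity: a % of the company times the expected gain.
fn option_equity(pct_of_company: f64, expected_exit: f64, current_value: f64) -> f64 {
    pct_of_company * (expected_exit - current_value)
}

// RSU equity, per the formula above: current value plus expected value at cash out.
fn rsu_equity(rsus: f64, current_share_value: f64, expected_share_value: f64) -> f64 {
    rsus * current_share_value + rsus * expected_share_value
}

fn main() {
    // Hypothetical offer: $150k cash plus 0.1% of a company currently
    // valued at $200M and expected to cash out at $600M.
    let equity = option_equity(0.001, 600e6, 200e6);
    let total_income = 150_000.0 + equity; // TotalIncome = Cash + Equity
    // Hypothetical RSU grant: 1,000 units at $100 now, $150 expected at cash out.
    let rsu = rsu_equity(1_000.0, 100.0, 150.0);
    println!("option equity: {equity}, RSU equity: {rsu}, total: {total_income}");
}
```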
Suppose you are at a company X; your compensation there is TotalIncome(X).
When you go to a Unicorn, what they will do to make a competitive and attractive offer is the following:
TotalIncome(Unicorn) > TotalIncome(X)
Where TotalIncome(Unicorn) = Cash(Unicorn) + Equity(Unicorn)
The way they do it is by saying:
Equity(Unicorn) > Equity(X)
So far so good… Nothing wrong so far.
But remember the value of Equity is very dependent on two parameters:
- Current Value
- Cash out Value
And here’s where Unicorns can really hurt employees. Unicorns like to offer RSUs.
- Because of the high current value per RSU, they can offer a small number of them
- The cash-out value – because it’s only common stock – only matters if the company IPOs
Giving out a small number really matters to founders and investors who care about dilution. The more shares you give out, the less each share is worth. Before the Unicorn phenomenon, a high-growth company that was hiring a lot of people would keep printing shares to keep hiring employees, and that would cause the early investors to get PO’ed. Stock dilution and the employee lockup were a big deal in 2001.
Not so much now.
And here’s how ….
If you are a Unicorn, any time you need to issue more shares or deal with compensation issues, you just artificially increase the value of your company through another round and hand fewer, higher-value shares to new employees. This allows you to simultaneously keep TotalIncome competitive and keep the number of shares static.
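A back-of-the-envelope sketch of those mechanics: the higher the valuation, the fewer shares a grant of the same dollar value requires. All the numbers below are made up for illustration.

```rust
// Illustrative sketch of why a higher valuation means fewer shares per hire.
fn shares_needed(grant_value: f64, valuation: f64, total_shares: f64) -> f64 {
    let price_per_share = valuation / total_shares;
    grant_value / price_per_share
}

fn main() {
    // Same $100k equity grant, 10M shares outstanding.
    let before = shares_needed(100_000.0, 200e6, 10e6); // at a $200M valuation
    let after = shares_needed(100_000.0, 1e9, 10e6);    // after a $1B round
    println!("shares per hire: {before} -> {after}");   // 5000 -> 1000
    assert!(after < before); // fewer shares issued, less dilution for owners
}
```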
If you are particularly craven as a Unicorn, you can have Cash be lower than your competition with a small number of RSU’s whose value is mythological.
So far so good.
And this is okay if and only if every single company IPOs and the public markets agree with the private-market valuations.
And that still would be okay if everyone were taking on the same risk. I mean everyone: founders, all employees, and investors. Except they are not.
The founders benefit the most from the lack of dilution since they own the most shares at the lowest possible value.
The investors are buying into the company at a much lower value. Think about it: you as an employee are buying with your sweat equity at a valuation of 1 billion, and the investor is buying equity valued at 200 million… The guy buying at 1 billion is going to be worse off than the guy buying at 200 million.
At some level, you can argue that I am just describing how start-ups work. And at some level I am… Except the valuations are not being set in the public markets but in private ones, and the valuation is being set at a billion for reasons other than the actual earnings of the company.
The Unicorn valuations are useful for retention, hiring, advertising, lead generation and ego and a product of a negotiation with little downside for the people doing the negotiation.
In effect, employees are getting less equity based on a valuation that has nothing to do with the actual earnings of the company… Instead they are getting paid based on a valuation whose opacity exists by design.
Unfortunately, this kind of shit never ends well…
Fenwick & West LLP put together another survey on the state of Unicorn financing.
Here are the most important bits:
- Entrepreneurs are becoming managers of companies instead of owners. The exit preferences really show that the investors are buying the company with an option to sell.
- Employees are really going to get screwed. Unless these companies IPO, investors are getting premium talent at a discount that only gets covered post-IPO. And for employees counting on $10B exits, those are rarer than a unicorn.
- Valuations are rigged to make the company look bigger, to convince employees that the risk is lower so they join. “We’re not an early-stage startup, we’re a big established company with massive stock upside (if and only if we net 10 billion post-IPO)… Take this lower salary and smaller stock package because it’s a sure thing™.” Except it’s not.
Net net, the game is rigged.
Been using Twitter for more than 5 years.
Have two sets of friends:
hockey and tech
Hockey folks don’t give a damn about tech
Tech folks don’t give a damn about hockey
Hockey folks don’t want to read about storage
Tech folks don’t want to read about the Habs
So I have two Twitter accounts
Really hated how there was no *easy* way to switch between them. I tweet less because of this. I read fewer tweets because of this.
Except there is … Can you tell me where?
Not there, stop looking.
What about here:
Can’t find it? Let me help:
I only discovered this when my finger accidentally hit that icon. Apparently mystery meat UI is in vogue …
I recently stumbled onto the Rust programming language.
What struck me was the promise of safety and performance – “a C for the rest of us” is the customer pitch.
And indeed Rust is a nifty programming language that tries to bridge the gap between managed code and unmanaged code. Managed code has system-managed memory, aka garbage collection; unmanaged code relies on the programmer to manage memory directly.
Conceptually, what they are doing is using the type system to enforce safety. This restricts what kinds of things you can do with pointers, but if your program fits within what the type system allows, you get safety without giving up direct control of memory.
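A minimal sketch of what that looks like in practice: the compiler tracks who owns each allocation and who is merely borrowing it, so memory is reclaimed deterministically without a garbage collector.

```rust
// Ownership and borrowing, checked entirely at compile time.
fn measure(text: &str) -> usize {
    text.len()
}

fn main() {
    let s = String::from("hello");    // s owns the heap allocation
    let len = measure(&s);            // lend it out with a shared borrow
    println!("{s} has length {len}"); // s is still usable after the borrow

    let t = s; // ownership moves to t; the compiler now rejects uses of s
    // println!("{s}"); // error[E0382]: borrow of moved value: `s`
    drop(t); // the allocation is freed deterministically, no GC involved
}
```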
There are papers from 15 years ago that explored this kind of concept, such as CCured: Type-Safe Retrofitting of Legacy Software. Rust almost represents a natural evolution of that thought process: instead of trying to retrofit safety onto an unsafe language, make a language safe from the start while retaining the ability to manage memory directly.
What is intriguing about Rust and what differs from the papers I remember reading so many years ago when I was a student at Stanford is that they are tackling the problem differently. Instead of asking: How do I make C safer? They are asking: How do I make it easier for Ruby programmers to write code that has memory that is unmanaged? Essentially they are posing the question – do we need garbage collectors at all? And if we don’t then that may have profound implications for how code gets written.
And as it turns out, the problem of enabling Ruby programmers to write unmanaged code is far more important to solve than the problem of making C safe, and, I might even argue, more tractable.
What is fascinating is that since the late ’90s there has been a need for a language that sits between direct manipulation of memory and completely managed code. That space still needs to get filled, and Rust is a credible player in it.
Just bought my first iPhone (iPhone 6 plus) after years of Android.
Not clear if I am delighted yet.
Scheduled some time with a stylist …