wrong tool

You are finite. Zathras is finite. This is wrong tool.


The problem with the mac

January 20, 2017 by kostadis roussos 1 Comment

Over the last several years, I have had two distinct sets of workflows:

  1. Kostadis, the developer who wants a full Linux experience
  2. Kostadis, the guy who interacts with product managers, engineering managers, and business leaders who require a complete Windows experience.

When IBM made the ThinkPad, the solution was obvious: use VMware Workstation to create a Linux VM.

However, after IBM sold the ThinkPad business to Lenovo, Lenovo couldn't retain the same quality, and the improvements in the Mac made it an attractive compromise.

You could use a Mac and use Windows software like Outlook, while simultaneously having a native Unix development experience without dealing with the complexity of virtual machines.

The experience wasn’t Linux, and the quirkiness of Mac OS made things annoying, and yet it was close enough.

At some point in time, pre-Nadella, the crappiness of the Windows software on the Mac made that choice painful. And eventually the pain was significant enough to cause me to switch back to Windows.

A few months with the best Dell and Lenovo had to offer, and I was done; that transition lasted less than a year.

And after the utterly underwhelming release of the latest Mac hardware, checking out Windows hardware became an option again.

And so I looked at what IT had to offer and discovered the Dell Precision 5510. The power of a modern PC, coupled with improvements in virtualization software, means that the overall value proposition of PC + VMware + Linux is now superior to Mac + crappy Microsoft apps + not-quite-Linux, or Mac + VMware with Linux and Windows VMs, or some flavor of those.


Filed Under: Uncategorized

Automation of Mathematics

January 5, 2017 by kostadis roussos Leave a Comment

Many moons ago, I read a book about Admiral Poindexter, and in it there was a reference to his Ph.D. in physics. What struck me was that the Ph.D. was a computation: he did the work of a computer.

And then this article popped up:

All The Mathematical Methods I Learned In My University Math Degree Became Obsolete In My Lifetime

Dr. Devlin began his career as a computer. And when calculators, then computers, and then the cloud emerged, his ability to be a computer was displaced by ever more sophisticated and faster machines.

What to do, then? Devlin writes:

So what, then, remains in mathematics that people need to master? The answer is the set of skills required to make effective use of those powerful new (procedural) mathematical tools we can access from our smartphone. Whereas it used to be the case that humans had to master the computational skills required to carry out various mathematical procedures (adding and multiplying numbers, inverting matrices, solving polynomial equations, differentiating analytic functions, solving differential equations, etc.), what is required today is a sufficiently deep understanding of all those procedures, and the underlying concepts they are built on, in order to know when, and how, to use those digitally-implemented tools effectively, productively, and safely.

In short, jobs that rely on the ability to execute repetitive tasks without understanding are going away, to be replaced by jobs that require adaptability and are non-repetitive.
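As a toy illustration of the shift Devlin describes, here is the quadratic formula packaged as a tool (a hypothetical sketch; the function name and interface are mine). Once the procedure is implemented, the remaining human skill is knowing when the tool applies and what its output means:

```python
import cmath

def solve_quadratic(a, b, c):
    """Return both roots of a*x^2 + b*x + c = 0.

    The procedure (the quadratic formula) is executed by the machine;
    the user's job is reduced to knowing when the tool applies.
    """
    if a == 0:
        raise ValueError("not a quadratic: a must be nonzero")
    disc = cmath.sqrt(b * b - 4 * a * c)  # cmath handles complex roots too
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

# x^2 - 3x + 2 = 0 factors as (x - 1)(x - 2)
print(solve_quadratic(1, -3, 2))
```

Mastering the arithmetic of the formula is no longer the valuable skill; recognizing that a problem is quadratic, and that complex roots mean something, is.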

The downside to these new jobs is that their outcome and payout is less predictable.

The other downside to those new jobs is that they are not the old ones.

And the final downside is that the skills necessary to do the new jobs are different from the old ones.

And the real foundational challenge is that we are preparing our children in our schools for the old world order.

We are like a company caught in a huge disruption. On the one hand, the old business pays but is going away, and the new one is too small.

And the next 20 to 30 years will be gut-wrenching. What the Trump voters experienced will be experienced across every form of human endeavor. If your job is to fit into a machine, the machine will replace you. If your job is to figure out what tools to use or how to invent new machines, then there is a place for you.

Teaching kids to find the white space is the only important thing that we should be teaching them.


Filed Under: Uncategorized

The correlated risk of the valley

January 1, 2017 by kostadis roussos 2 Comments

The past eight years have been great for the Valley. Before 2008, the Valley built technology for large corporations that would in turn use the technology to optimize their businesses. Now the Valley is creating new businesses that happen to use technology.

In short order, we overturned the taxi industry, created the first new car company of note, transformed how we interact with each other, radically transformed how content gets created and delivered, transformed food delivery, are disrupting pay-day loans, and the list goes on.

At the heart of the business models is an understanding of how people interacting with intelligent machines can efficiently deliver services that in the past were too costly to provide.

We have gone from being the disruptors to becoming mainstream.

When Mark Pincus used analytics to help create Zynga, the gaming industry puked all over us. Now, every single game company uses some amount of data analytics to optimize their games.

And that got me thinking.

We have created a bland uniformity in our corporate structure. Our companies look the same, have the same kinds of people in them, are structured the same, and are leveraging the same kind of technology.

Our venture capitalists are pursuing the same sort of risk-mitigation strategies: distributing their bets across as many good deals as they can find. And yet the underlying technology structure of most of those bets is similar.

The last time this kind of thing happened was in the banking crisis of 2008, when every single banking company was pursuing the same business strategy, leveraging the same algorithms to reduce risk, and as a result exposing themselves to the same underlying catastrophic risk.

And startups are doubling down on the intelligent machine model. For example, Zappos is trying to fix human interaction. The Zappos solution is to seek to replace the ambiguity of human relationships with the structure of software systems.

One of my favorite thinkers is Nassim Taleb. His books are difficult to read, and yet he makes a profound point: the more you try to avoid risk and the more robust you make a system, the more fragile it becomes, because any remaining weakness will obliterate everything.

In our case, the valley is trying to de-risk human decision making using intelligent machines.

There is too much sameness, too much of the same kind of operating model.

And when you see this amount of similarity, you know that this entire world will get disrupted somehow.

My belief is that the limits of intelligent machines are poorly understood. And the faith in the power of those tools will lead to massive amounts of correlated failure. The failures will occur simultaneously because of the sameness. And the effect will be a broad-based failure.

The companies that do disrupt the current valley will be those that understand the limits of machine learning and figure out how to use the human brain, not to make the algorithm more efficient, but to enable the human brain to do things it could not.

What that thing is, and when the disruption will come, are both unknown. The only thing I am certain of is that both will happen.


Filed Under: Uncategorized

Normalizing Horror and Engineering Ethics

January 1, 2017 by kostadis roussos Leave a Comment

I am reading a fascinating book these days, titled Enhanced Interrogation: Inside the Minds and Motives of the Islamic Terrorists Trying to Destroy America.

The author describes his work using torture, euphemistically called Enhanced Interrogation Techniques (EITs), to fight the war on terror.

He not only used EIT, but he also invented many of the procedures and protocols.

And in many ways, he was very successful. EIT works.

The distinction between torture and EIT, of course, is perhaps a matter of perspective. James used techniques that were more effective and efficient. And the goal wasn't pain; the goal was to break down the resistance of the evil terrorist.

Using torture (aka EIT), James was able to break people who otherwise were incredibly stubborn and difficult to break.

What is fascinating is the realization that the two inventors of these new, efficient torture (or Enhanced Interrogation) techniques felt trapped.

On the one hand, they were the experts who could do a great job balancing the need to extract information against the need to be brutal; on the other, they realized that if they didn't participate, innocent people would die and less expert torturers would be used.

Repeatedly in the book, they try to explain their moral dilemmas, their personal repulsion at the whole activity, and their attempts to figure out exactly where the line was that they felt they might be crossing.

Reading the book made me think about the morality of expert advice. Were these torturers making a moral choice?

Is the defense "well, the alternative would have been worse" a good one?

As technologists, we will increasingly be asked to do evil things. And is the claim that the alternative is worse a defense?

As a student of the Second World War: if Albert Speer had not helped Nazi Germany, the war would have ended sooner. If Field Marshal Gerd von Rundstedt had not been as effective a defender of Germany, fewer lives would have been lost.

In short, less evil is still evil.

To quote Gandhi, the only moral response to evil is non-cooperation. Mitigating evil will not make the evil less evil.

I hope never to have this kind of moral quandary.

What I do know is that as more and more evil will require engineering, we all have to ask ourselves: is it better to mitigate evil or to refuse to do it?

And I hope and pray that we all choose the only moral choice: there is no mitigation of evil, there is only non-cooperation.


Filed Under: Uncategorized

AWS and the automation of retail

December 29, 2016 by kostadis roussos Leave a Comment

I was noodling on how automation was affecting industries. And I was also noodling about the cloud in my role at VMware.

And that got me thinking about what is going on with retail because it is the Christmas season.

Amazon is forcibly re-engineering the entire retail supply chain to be digital.

You use a mobile device to find and then buy stuff. If your business doesn’t have a mobile presence, your business is not reaching a staggering number of customers.

The change from brick-and-mortar to digital interaction is so huge that it’s got its own name: Digital Transformation.

Then I got to thinking: how does this affect society?

The computers sitting in the cloud are doing the job of the retail employee who would help you find stuff, and then ring you up at the register.


This retail season, I spent a lot of time thinking about the macro of the cloud. And I realized that the macro of the cloud is that everyone in the retail industry is moving to a cloud-service model because they need peak burst capacity. During the gift-giving season, retail makes more money and employs more people than at any other point in the year. And the total number of people required during the low retail season is significantly smaller.

The computing capacity required during the low retail season is likewise significantly lower. And since the fixed cost of peak burst capacity is very high, it makes a lot of sense to spin up capacity on demand in the cloud.
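A back-of-the-envelope model of that trade-off (the cost function, parameter names, and every number below are invented for illustration; real cloud pricing is far messier):

```python
def yearly_cost(peak_units, base_units, peak_months,
                owned_cost_per_unit_month, cloud_cost_per_unit_month):
    """Compare owning peak capacity year-round vs. renting the burst.

    Returns (own_everything, own_base_and_burst_to_cloud).
    All inputs are illustrative assumptions, not real prices.
    """
    # Option 1: buy enough hardware for the peak, pay for it all year.
    own = peak_units * 12 * owned_cost_per_unit_month
    # Option 2: own only the base load; rent the difference during peak.
    burst = (base_units * 12 * owned_cost_per_unit_month
             + (peak_units - base_units) * peak_months * cloud_cost_per_unit_month)
    return own, burst

# Hypothetical numbers: peak is 5x base, peak lasts 2 months, and cloud
# capacity costs 3x owned capacity per unit-month.
own, burst = yearly_cost(peak_units=500, base_units=100, peak_months=2,
                         owned_cost_per_unit_month=100,
                         cloud_cost_per_unit_month=300)
print(own, burst)
```

Even at a steep per-unit premium, renting the burst wins whenever the peak is short and tall relative to the base load, which is exactly the shape of the retail season.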

And that got me thinking: what happened before?

And the answer is what we used to call seasonal hiring.

And if I was right, then the impact of automation on seasonal hiring should already be visible in hiring patterns.

And lo and behold:

http://www.retaildive.com/news/bucking-trend-jc-penney-hiring-many-more-seasonal-workers/426625/

Last year’s job gains were 1.4 percent lower than 2014 figures, according to employment data from the Bureau of Labor Statistics cited by Challenger, Gray & Christmas. “We continue to move from brick-and-mortar toward click-and-order,” Challenger, Gray & Christmas CEO John A. Challenger said in a statement. “But even in the internet era of holiday shopping that means that brick-and-mortar fulfillment facilities need seasonal workers.”


Filed Under: innovation, Jobs, Uncategorized

A Case Study in Hiring in the Unicorn Era

April 23, 2016 by kostadis roussos 1 Comment

There was a recent article on Business Insider that described how an engineer managed to get an annual salary of $250k.

What is interesting is not the negotiations; what is interesting is how fake valuations for an illiquid stock are being used to raise wages.

The lead of the article is that the engineer got $250k. The interesting bit is at the bottom:

On all but two of his offers, he negotiated. The base salary was mostly the same at around $130,000 a year. He negotiated more aggressively on RSUs and signing bonuses.

Fascinating.

Non-Unicorns have a hard time competing for talent. 

No company was offering more in cash. Every company was offering more in equity. And here's where Unicorn valuations help. A Unicorn can make an offer that is worth a lot on paper through its RSUs, an offer that can compete favorably with Google even though, in practice, the liquidity and value of the two stocks is an apples-to-oranges comparison.

And so if you are not a Unicorn, you can't compete. You simply don't have the valuation necessary to get people to join. And this forces more and more companies to become Unicorns so they can compete for talent.


Unicorns have to compete on dollars with Google

Airbnb could not offer less equity than Google and still convince the employee to join. Theoretically, Airbnb should be able to offer less equity and win because of its growth potential. And yet Airbnb was unable to close this deal.

The net is that Unicorns are treating their private valuations as public valuations and using the face value of those valuations to attract and hire talent. Because the RSUs are being treated as equivalent to Google stock even though they are much riskier, a down round can be terrifying to a Unicorn. If employees are looking for a lottery ticket instead of a mission, a down round can result in a horrific wave of attrition. Stock going down in the public markets is annoying; stock going down in the private markets may convince employees that the stock may never be liquid. In effect, a down round may convince employees that the stock is now worthless.
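One way to see the apples-to-oranges problem is to discount the illiquid equity explicitly (a hypothetical sketch; the discount factor and all the numbers are made up for illustration):

```python
def offer_value(base, rsu_face_value, liquidity_discount):
    """Annualized offer value, discounting illiquid equity.

    liquidity_discount is the fraction of face value assumed lost to
    illiquidity and risk: 0.0 for freely tradable public stock, higher
    for private shares that may never become liquid. Illustrative only.
    """
    return base + rsu_face_value * (1 - liquidity_discount)

# Same face value on paper, very different risk-adjusted value:
public = offer_value(base=130_000, rsu_face_value=120_000, liquidity_discount=0.0)
unicorn = offer_value(base=130_000, rsu_face_value=120_000, liquidity_discount=0.5)
print(public, unicorn)
```

The Unicorn's pitch only works if the candidate sets the discount near zero, which is exactly what treating a private valuation as a public one asks them to do.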


Filed Under: Uncategorized

The future 

April 16, 2016 by kostadis roussos Leave a Comment

I read about the first VR system in Foundation in my teens. I saw my first at Brown at 20. My son saw his first at a museum at 5. His child will have ocular implants.


Filed Under: Uncategorized

Why join a startup?

December 28, 2015 by kostadis roussos Leave a Comment

Why should you join a startup if the probabilities are not in your favor?

Startup L. Jackson writes a great note as always.

Mr. Mehta does an excellent summary.

And then there is this article 

And they capture the essence of the theory: joining a startup is a lifestyle choice and an opportunity to short-circuit the career-advancement ladder, or a way to learn new skills.

I joined Zynga because I wanted to work on hyper-scale infrastructure. And I got that opportunity in spades.

Without going into too much detail: I led a web property, built out a 200-person dev-ops function, and had a team that delivered many products used to operate games. Under my watch, we had less than 30 minutes of planned downtime and delivered over four nines of infrastructure availability. I also got to build out a third-party API platform and kick-started an effort to create a gaming-optimized mobile programming language.

And I met a whole bunch of amazing people who are friends.

You don't get that kind of crazy experience in four years at a large company. And I made the choice to go to Zynga to learn, and I got that.

And this might make the backers of these deals happy that the employees got a first-class education, but it doesn't change the reality that there is something fishy about those deals.


Filed Under: Jobs, Uncategorized

Hello Michael!

December 14, 2015 by kostadis roussos Leave a Comment

Thirty years ago, I remember reading about Dell computers and wanting one very badly.

Living in Greece, I saw Dell computers as awesome, magical things that you could never own.

When I came to the USA in 1992 to study at Brown University, I couldn't wait to buy a Dell. My first Dell, bought that year, never stopped working.

Apparently neither has this one. 

And so I know it's just a side effect of some social-media person filling out a form and cross-checking VPs at VMware against LinkedIn profiles, and I know it's nothing personal... and I still think it's cool!


Filed Under: Uncategorized

The end of storage tiers

July 2, 2015 by kostadis roussos Leave a Comment

I wrote about this in 2008 on my now defunct corporate blog at NetApp. It’s fun to be working at a company that can actually create the IOPS tier.

Flash has once again thrown into stark relief the absurd classification of storage into tiers.

Talk to a storage vendor and Tier 1 is their most expensive stuff. Talk to a storage architect and Tier 1 is their most expensive stuff. If you’re lucky there is some overlap.

Then we have Flash. Is it Tier 0? Does Flash make Disk Tier 5? What is the role of Flash and Disk? Is Disk the new Tape? So do we need to have Tier -1 for storage that is faster than Flash?

Then there is the notion that disk storage is "secondary storage." Secondary to what?

I never really did get all of those classifications of storage into tiers. I tend to think of storage in terms of how it is used.

So instead, let me propose a new model for storage tiers based on the ratio of application CPU and memory, the number of IOPS required, and the capacity needs of the application: the ratio CPU:memory:IOPS:capacity.

Based on that ratio, there are three storage tiers:

  1. Captive IOPS, where the IOPS are dedicated to a single application. In this deployment the ratio is 1:1:1:1. Add more CPU and memory and you add more IOPS and capacity. Because of the nature of the application and how many IOPS it consumes, there is nothing left over for another application.
  2. Shared IOPS, where the IOPS are shared across a collection of applications. In this deployment the ratio is M:N:1:1. As you add more CPU and memory, the number of IOPS increases, but not at the same rate. So you can share the IOPS across a number of applications rather than dedicating them to a single one.
  3. Capacity Efficient, where the number of IOPS is dwarfed by the capacity requirements. In this deployment the ratio is M:N:1:Q, where as M and N increase, Q increases but IOPS do not. A good example is a backup server: as more data gets backed up, you need more capacity, but you don't actually need more IOPS. Another good example is a home directory, where capacity needs increase but actual IOPS do not.
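The three tiers above can be sketched as a tiny classifier (a hypothetical illustration; the function, the normalized growth-rate inputs, and the decision rules are mine, not an established taxonomy):

```python
def classify_tier(cpu_growth, iops_growth, capacity_growth):
    """Classify a workload by how IOPS and capacity scale with compute.

    Inputs are relative growth rates: 1.0 means the resource scales in
    lockstep with application CPU/memory. Thresholds are illustrative.
    """
    if iops_growth >= cpu_growth:
        return "captive IOPS"        # 1:1:1:1 -- adding compute adds IOPS
    if capacity_growth > iops_growth:
        return "capacity efficient"  # M:N:1:Q -- capacity grows, IOPS do not
    return "shared IOPS"             # M:N:1:1 -- IOPS shared across apps

print(classify_tier(1.0, 1.0, 1.0))  # e.g., an OLTP database pinned to its storage
print(classify_tier(1.0, 0.3, 0.3))  # e.g., many apps sharing one array
print(classify_tier(1.0, 0.1, 0.9))  # e.g., backup server or home directories
```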

Next, I’ll explore the implications of these three tiers.


Filed Under: Uncategorized
