

The correlated risk of the valley

January 1, 2017 by kostadis roussos

The past eight years have been great for the Valley. Before 2008, the Valley built technology for large corporations, which in turn used that technology to optimize their businesses. Now the Valley creates new businesses that happen to use technology.

In short order, we overturned the taxi industry, created the first new car company of note, transformed how we interact with each other, radically changed how content gets created and delivered, remade food delivery, began disrupting payday loans, and the list goes on.

At the heart of these business models is an understanding of how people, interacting with intelligent machines, can efficiently deliver services that were previously too costly to provide.

We have gone from being the disruptors to becoming mainstream.

When Mark Pincus used analytics to help create Zynga, the gaming industry puked all over us. Now, every single game company uses some amount of data analytics to optimize its games.

And that got me thinking.

We have created a bland uniformity in our corporate structure. Our companies look the same, employ the same kinds of people, are structured the same way, and leverage the same kinds of technology.

Our venture capitalists are pursuing the same sort of risk-mitigation strategy: distributing their bets across as many good deals as they can find. And yet the underlying technology structure of most of those bets is similar.

The last time this kind of thing happened was in the banking crisis of 2008, when every bank was pursuing the same business strategy, leveraging the same algorithms to reduce risk, and as a result exposing itself to the same underlying catastrophic risk.

And startups are doubling down on the intelligent-machine model. Zappos, for example, is trying to fix human interaction; its solution is to replace the ambiguity of human relationships with the structure of software systems.

One of my favorite thinkers is Nassim Taleb. His books are difficult to read, and yet he makes a profound point: the more you try to avoid risk and the more robust you try to make a system, the more fragile it becomes, because any remaining weakness will obliterate everything.

In our case, the Valley is trying to de-risk human decision-making using intelligent machines.

There is too much sameness, too much of the same kind of operating model.

And when you see this amount of similarity, you know that this entire world will get disrupted somehow.

My belief is that the limits of intelligent machines are poorly understood, and that faith in the power of those tools will lead to massive amounts of correlated failure. Because of the sameness, the failures will occur simultaneously, and their effect will be broad-based.
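To make the correlated-failure point concrete, here is a minimal simulation sketch. The firm count, failure rate, and shock probability are invented for illustration; the only point is that diversification protects you when bets fail independently, and stops protecting you the moment they all share one assumption.

```python
import random

# Toy model (illustrative numbers only): a portfolio of startups that
# either fail independently, or share one common assumption -- the same
# intelligent-machine playbook -- that occasionally breaks for everyone at once.

N_FIRMS = 50        # hypothetical portfolio size
P_FAIL = 0.10       # chance any one firm fails in a given year on its own
P_SHOCK = 0.05      # chance the shared assumption breaks in a given year
TRIALS = 20_000     # number of simulated years

def wipeout_rate(correlated: bool) -> float:
    """Probability that at least half the portfolio fails in the same year."""
    wipeouts = 0
    for _ in range(TRIALS):
        shock = correlated and random.random() < P_SHOCK
        failures = sum(
            1 for _ in range(N_FIRMS)
            if shock or random.random() < P_FAIL
        )
        if failures >= N_FIRMS // 2:
            wipeouts += 1
    return wipeouts / TRIALS

random.seed(1)
print("independent bets:", wipeout_rate(correlated=False))  # effectively 0
print("correlated bets :", wipeout_rate(correlated=True))   # roughly P_SHOCK
```

With independent failures, losing half the portfolio in one year is essentially impossible; with one shared assumption, it happens about as often as that assumption breaks.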

The companies that do disrupt the current Valley will be those that understand the limits of machine learning and figure out how to use the human brain, not to make the algorithm more efficient, but to enable it to do things it could not do before.

What that thing is remains unknown, and so does the timing of the disruption. The only thing I am certain of is that both will happen.



Comments

  1. Dali Kilani says

    January 2, 2017 at 11:26 am

    Happy new year 🙂 Oh yeah, did they puke all over us back then…

    One simple example to illustrate your point is how much stuff goes down when AWS has (serious) problems… It’s a typical example of the sameness leading to a more catastrophic outcome when a less probable event happens.

    I do agree that the current trend around ML and AI is moving up really fast along the hype curve, and we’re ripe for a big disillusionment…

    • kostadis roussos says

      January 2, 2017 at 5:22 pm

      Remember https://en.wikipedia.org/wiki/Cow_Clicker?

      And his talk?

      And then the success of his parody game?

      If only he had big data, he might have been able to be huuuge.

      And yeah, AWS is another great example. And yes, I agree about the ML/AI hype, and I think there is something more disturbing going on.

      There is a fetishization of data to replace human intuition and human thinking processes, turning humans into more machine-like entities.

      And that is an ongoing trend that reflects our love for the computer systems we work with.

