wrong tool

You are finite. Zathras is finite. This is wrong tool.


Warhol and Decoding the Past

April 24, 2014 by kostadis roussos 1 Comment

The recent announcement from the Warhol archives about recovering some Amiga art got me thinking about how miraculous this feels. I am not talking about Warhol using an Amiga, but about us being able to recover the data at all.

When I think about signature scenes in science fiction, the activation of the ancient computer and recovery of the data from the data banks resonates.

[Image: LostcityPart111]

The story resonates because it goes to the even deeper tale of finding ancient knowledge lost in the ground, a story that the historian in me finds particularly alluring.

As I got older, and especially after a decade at NetApp, I learned how absurd the tale was. There are several challenges in decoding data. The first is that the media requires special devices to read it, devices that fail over time and are no longer built. The second is that the data format is tied to software that requires a special piece of hardware that may no longer be manufactured and may have failed. And finally, there is the very unfortunate reality that the media itself fails over time. This doesn’t even begin to address the challenge of information encoding and decoding and, of course, encryption…
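The way those failure modes compound can be sketched with a bit of arithmetic. The per-decade survival rates below are illustrative assumptions, not measured data; the point is that recovery requires every link in the chain to work at once, so the odds fall off multiplicatively.

```python
# Back-of-the-envelope model: recovering archived data requires every
# dependency in the chain to survive. The survival rates per decade are
# illustrative assumptions, not measured failure rates.

def recovery_probability(per_decade_survival, decades):
    """Chance that every dependency still works after `decades` decades."""
    p = 1.0
    for survival in per_decade_survival.values():
        p *= survival ** decades
    return p

# Hypothetical per-decade survival rates for each link in the chain.
chain = {
    "media (magnetic decay)": 0.90,
    "drive (reader hardware still exists and works)": 0.70,
    "software (still runs on an available machine)": 0.80,
    "format (encoding still documented and understood)": 0.95,
}

for decades in (1, 3, 5):
    p = recovery_probability(chain, decades)
    print(f"after {decades * 10} years: {p:.1%} chance of recovery")
```

Even with these fairly generous assumptions, the compounded odds drop below fifty-fifty after a single decade, which is why a successful recovery like the Warhol one feels miraculous.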

One of the things that fascinates me about the future and the cloud is that when some bad event happens, vast amounts of knowledge will be lost. What if all of science and technology abruptly vanishes in a sea of lost electrons, leaving the future with nothing but documents about stuff we invented last century, documents we might have destroyed because they were digitized… Who wants to keep dead-tree media when you can digitize it?

And even if I don’t care about the cloud, I worry about me. Will everything about me get deleted because my credit card expired after I died?

And when I think about that some more, and about biodegradable infrastructures and products, I begin to wonder: what if there were another civilization like ours, one that had completely digitized its knowledge and had a biodegradable physical infrastructure, and some catastrophic event occurred that left only these primitive dwellings behind?

So when I see someone pull up an old image that was stored on electronic media on a long-lost computer, I am amazed…

And then I wonder, will we have a profession of computer archaeology that emerges 100 years from now?



Filed Under: innovation

Supercomputer Markets and the PC

April 21, 2014 by kostadis roussos Leave a Comment

My first job out of school was in 1995 working for Silicon Graphics on high end supercomputers. In particular, I was working on the Irix kernel scheduler.

In fact, my first paper was about a piece of technology that is now viewed as very archaic: batch scheduling. At the time, the problem was an area of active research…

Supercomputers were about to hit a brick wall that year. A combination of factors was responsible: the end of the Cold War killed DARPA funding, and the increased performance of x86 processors and networks made clustering technologies good enough for an increasingly large share of the computational pie.

In parallel with the implosion of SGI’s server business, the high-end graphics business imploded as well.

The Octane was supposed to be the next-generation high-end workstation,

[Image: SgiOctane]

only for SGI to discover that the combination of AGP and Nvidia made a completely custom design… well, much less valuable.

1996 was the last profitable year for SGI.

One of the more vivid recollections I have of the era is the discussion of the Top 500 supercomputer sites. Folks in the supercomputer biz bemoaned that Intel would soon dominate the list with commodity computing systems, and that the entire era of supercomputers, with their amazing underlying technologies, was about to go away.

Nearly a decade after I left SGI, I attended a conference in Vail where I heard the exact same speech.

10 years later, the Top 500 list still has a collection of eclectic system designs.

And that got me thinking about supercomputers and their markets and the business economics …

The most important part of this blog post is to tell you that other people have written about this elsewhere. The most famous, and the most brilliant, is the book The Innovator’s Dilemma by Clayton Christensen. If you haven’t read that book, stop and go read it.

Waiting.

Did you read it?

Waiting.

Good, you’re done.

Alright.

So what is a Supercomputer market? A supercomputer market is a market where the computational requirements are increasing faster than Moore’s law or are inherently so large that conventional computing systems are too slow at present and for the foreseeable future.

A much greater systems architect than I am described it this way: the customer wants performance to increase 4x every two years.

To deliver that kind of performance, vendors have to deliver exotic computational architectures that are at the limits of what humanity can create at this time.

The price the customer pays for that kind of horsepower is determined by the business value of the problem being solved.

And as long as the customer’s computational needs remain beyond what conventional computer systems can deliver, every subsequent generation will command about the same price or more.
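That dynamic can be sketched with a toy model. The growth rates below are assumptions drawn from the post itself: demand quadrupling every two years (the architect's rule of thumb) against commodity performance doubling every two years (Moore's law), with both starting equal at year zero.

```python
# Toy model of supercomputer-market economics under assumed growth rates:
# customer demand grows 4x every two years, while conventional commodity
# systems only double every two years (Moore's law).

def performance(start, growth_per_period, periods):
    """Performance after `periods` two-year generations."""
    return start * growth_per_period ** periods

for year in range(0, 11, 2):
    periods = year // 2
    demand = performance(1.0, 4.0, periods)      # what customers need
    commodity = performance(1.0, 2.0, periods)   # what commodity delivers
    # While demand exceeds commodity performance, only exotic custom
    # architectures can close the gap, and each generation commands
    # about the same price or more.
    print(f"year {year:2d}: demand {demand:7.0f}x, "
          f"commodity {commodity:4.0f}x, gap {demand / commodity:4.0f}x")
```

Under these assumptions the gap itself doubles every generation, which is why the premium holds: the customer's only alternative to the exotic machine is to fall further and further behind.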

The nice property of supercomputer markets is that it’s practically impossible for new entrants to compete in the space unless they can figure out how to fundamentally disrupt the incumbent. Especially if the market has been evolving for a long time.

If you had to build a new supercomputer from scratch, the capital cost would be staggering, never mind the challenge of finding the brainpower necessary to build it outside of the established vendors.

The amazing sweet spot for a business is when a supercomputer market is a broad market with huge computational needs that can only be satisfied by supercomputers. Basically, you are building these amazing computers that people will pay a large chunk of cash for, because the choice is to buy them or not be in the business that requires them. And there are a lot of those people.

A key indicator that you are not in a supercomputer market is when the customer doesn’t have to buy the next generation of hardware and can still remain in the business that uses it. In other words, the next generation is an improvement but no longer essential.

How is this different from The Innovators Dilemma?

The Innovator’s Dilemma focuses on individual vendors and how they get disrupted, but ignores the broader market trend. In point of fact, The Innovator’s Dilemma focuses on how new architectures can disrupt alternative architectures while the broader supercomputer economics trend remains intact.

If we look at disk drives, the canonical example in the book, The Innovator’s Dilemma observes that disk drive vendors came and went. My observation is that there was a macro need for more storage, and as long as that remained true, supercomputer economics would hold. The broader supercomputer economics trend held true even as many vendors got disrupted… It was only in 1996, when capacity prices collapsed, that the average computer user had their average storage needs satisfied.

My favorite example is the PC because it’s not a supercomputer 🙂

For almost 18 years, consumers would spend about $5k on a new PC, because the next-generation PC was so much better than the last one. The computational needs of consumers were inherently greater than what computers could deliver in 1981, and remained so until 1999. For 18 years, all you had to do if you were Dell was build a computer, and people would buy it because their needs were unmet by the current generation.

And that brings me to the problem with supercomputer markets …

It turns out that there are two kinds of supercomputer problems. The first is what I call inherently computationally hard. These are problems where you are trying to simulate physical processes or tackle inherently hard computations; they are computationally expensive and will remain so indefinitely. The second kind is what I call capped computational problems.

And maybe this is my second insight.

A capped computational problem is one where humans are consuming the computation directly. If you are building something for people, eventually you run into that bottleneck: the human ability to perceive and interact.

Put differently, supercomputer markets can exist indefinitely as long as you are processing machine level interactions that are not gated by human processing.

So returning from orbit… 

So what ends this kind of macro trend?

  1. Solving the problem
  2. Lack of interest in the problem

Some examples of 1 include extreme high-end 3D graphics and PCs. An example of 2 is the peace dividend at the end of the Cold War, when Clinton cut military funding, which cut funding for the purchase of supercomputers because a class of problems was simply no longer that interesting.

The problem with supercomputer markets that are constrained by human perception is that eventually the computers get too fast, and the markets end. This doesn’t mean that the market for the product disappears, but the very nature of the market for the product changes.

So what happens when supercomputer markets end?

Typically the incumbent vendor goes out of business super-fast. Basically no one wants to buy their next generation hardware because the last one solved the problem.

And at that point the market transitions to an enterprise market with different economics. The most important being that customers want the next generation to be twice the performance at half the cost.

Visually

[Figure: 2014-04-20_2254, customer demand vs. supercomputer and mainstream technology performance over time]

What this picture tries to show is that while customer demand is unmet by the supercomputer technology, the supercomputer technology continues to thrive, selling each new generation of hardware. Once the supercomputer technology meets customer demand, then customer demand shifts to mainstream technologies.

This is not a case of the mainstream getting good enough; instead, it’s a case of the customer no longer caring about incremental improvements because the problem is solved.
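The crossover in the picture can be sketched with a toy model. The demand cap (1000x) and the doubling rate are illustrative assumptions; the shape is what matters: mainstream performance compounds until it hits the cap, and at that point the supercomputer premium evaporates.

```python
# Toy model of the crossover: demand is capped (say, by human
# perception), mainstream performance doubles every two years, and the
# supercomputer market lasts only until mainstream crosses the cap.
# The cap (1000x) and the growth rate are illustrative assumptions.

DEMAND_CAP = 1000.0   # fixed ceiling on performance the customer can use
mainstream = 1.0
year = 0
while mainstream < DEMAND_CAP:
    year += 2
    mainstream *= 2.0  # Moore's-law doubling every two years

print(f"mainstream meets the capped demand around year {year}")
# Past this point the customer no longer pays a premium for more speed;
# the market flips to "twice the performance at half the cost".
```

The exact crossover year is an artifact of the made-up numbers, but the qualitative story matches the post: the transition is abrupt, which is why the incumbent vendor goes out of business super-fast.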



Filed Under: innovation, Selling

Do not move fast and break things

April 13, 2014 by kostadis roussos Leave a Comment

If you’re an engineer, the probability that you understand the life and times of folks in IT is marginal at best.

As engineers, our job is to figure out new ways of building things. We’re paid to innovate and break things. And when the things we innovate on have big outcomes, we get big rewards.

IT, on the other hand, has a different function. IT’s job is to ensure that there are standard operational systems in place that allow the business to run efficiently. In many ways, IT is not there to innovate; in many corporate environments, IT exists to thwart unnecessary innovation. Essentially, lines of business are allowed to innovate in their areas, but to get something deployed at scale, the conservative, time-consuming IT processes exist to minimize the risk of some new, disruptive, immature technology taking down your company.

Most technology companies refer to this as the “extended enterprise sales cycle”.

So if Facebook says their motto in engineering is “Move fast and break things”, in IT it would be slightly different:

[Image: 2014-04-12_2149, the IT version of the motto]

If you are in IT, a way to advance your career is to advocate for a new product or technology and have that product or technology work out. Using the technology becomes a bet-your-career kind of moment.

And if the technology doesn’t work, well, that is the end of your career. You’re the douche who introduced Product A that everybody hates, the one everybody says should be fired. How many of us have sat around using some piece-of-shit product, screaming for the head of the fool who forced us to use it? Many. Many. Many.

The old adage that no one ever got fired for buying X is real. If you make a safe choice and it doesn’t pan out, it’s not your fault, it’s the vendor’s fault. If you make a risky choice and it doesn’t pan out, it’s your fault.

To engineers, this is perplexing. The whole point of innovation is to move the ball forward, so failure is normal, but IT doesn’t operate at the edge of innovation. IT must keep the lights on.

So when someone in IT bets on your technology they are betting their entire future on you.

Let’s dig into that a little bit more.

Typically you’re selling a piece of technology into a space where IT already has inadequate solutions. The natural and correct reaction of folks in IT is to:

  1. deny the problem
  2. try to solve it with the existing vendors
  3. try something new only if they are desperate.

Why do they deny? 

Because introducing the new technology means disruption and change. And when you’re short on resources trying to keep the lights on, the last thing you want is disruption and change. Most of the time, people complaining about a problem just go away, so it’s best to just wait and see if this is a real issue.

As for why stick with the existing vendors?

Because the minute you introduce a new vendor, where there was one way of doing things, now there are two, and that means everything gets more complicated.

Because you have to keep two sets of employees trained, one for vendor A and one for vendor B. And you can sit there and wait for a heterogeneous management solution, but you are finite.

Worse, new vendors have new issues. The existing vendor you understand; they are like a partner you’re comfortable with. Things might not be perfect, but you understand each other. The new vendor is like an exciting new partner that promises so much, but once you get past the first date you start to learn things, and it will take a while to get comfortable.

If you are desperate, try something new

And so when you’re desperate you will be forced to try something new. The problem will not go away, and your existing vendor can’t solve it.

The dice are rolling…

So you do extended Proofs of Concept (PoCs) to prove the new guy can’t solve it either. Because you really, really want the new guy to go away so that the problem will go away.

But the new guy doesn’t fail, and the PoC is successful, so you’re at the roll-the-dice moment.

Let’s take a step back and think about this situation: you have an acute problem that you can only solve with new technology, and if the outcome is a failure, it’s your fault, because it was your job to make the right decision.

This is usually a make-or-break-your-career moment.

[Image: 2014-04-12_2155]

If the new vendor works out, you’re a hero. If the vendor doesn’t, your replacement can figure out what to do next.

And the stakes are typically not that high… it’s not like sales really needs their email or the web site needs to be up.

So what happens when your new technology fails?

Bad things.

The acute problem is still acute and needs to be solved, so your boss looks for new ideas and new solutions, typically elsewhere.

It’s not bad, it’s not evil, it’s just the way the world works.

The guy who bet his career on the outcome is now wondering how long it will take for him to recover from the mess…

Some final thoughts. 

As a vendor, I remember selling a piece of technology to a customer and the technology failing. The effect of the failure was cataclysmic: our ability to penetrate parts of that customer’s infrastructure was permanently handed to our competitors. The fact that the underlying problem existed in every vendor’s infrastructure was irrelevant. Our product had failed. As for the person who had bet his career on our tech, he was never going to trust us again. I remember the anger in his voice, and the feeling of outrage.

As an engineer, I was kind of like: Dude, get over it.

Until I was on the receiving end of such a mess.

A vendor made a decision that was good for them but screwed my company over. I can’t argue with their decision, and they were very good about telling me about it, but the reality was that I was screwed, my team was screwed, and my career was hurt. The net effect of that right decision: I will never trust that vendor again. I’ll buy what they have, but I won’t ever bet on them keeping their roadmap promises.

Promises we make to our customers affect their lives. And when we fail to deliver on those promises, it can cost careers. Not ours, theirs.

At some level, you start to wonder what this really means. Well, if the customer is a really prominent player in an industry and they fail with your tech, the conservative nature of IT will be to reject your tech for a good long time. The incumbent you are displacing will use that one failure to keep you out for a very, very long time. Worse, the guy who took the risk will probably tell all of his friends what he thinks of your technology. And everyone will look at him as a cautionary tale: if you bet on vendor X, you too can become a bitter, grumpy old dude, so better to use the standard technology. Let someone else take the risk.

As I look at the world with slightly more jaded eyes, I realize IT is Zathras. They are always confronted with the wrong tool. Worse, unlike Zathras, they don’t understand how the tool is built, and when they pick the wrong tool, they find themselves in the wrong place…

So to all the folks in IT my products ever screwed, I feel your pain and I am sorry.



Filed Under: innovation, Selling

A relativistic theory of innovation

April 6, 2014 by kostadis roussos

One of the most irritating conversations I get dragged into is that moment when some senior management person or some senior technology boss says:

We need to do more innovation.

My heart stops. Oh-god-no… If we’re asking the question we’re already screwed.

And Lord Help Me, but the conversation that always follows the mandate for more innovation is the debate:

What is innovation…

And then every single person in the organization becomes a theologian debating how many angels can dance on the head of a pin… We have offsites, and training classes, and meetings, and posters, and metrics, and…

Because, ultimately, one man’s innovation is another woman’s obviously boring idea.

And that got me thinking. What if the man and the woman are both right, and innovation is actually relative to your point of view?

When I think about innovation, I think of the process of doing something new. The question then becomes, what is new?

Let’s consider three interesting edges in the world of human knowledge. One edge is the true edge of the unknown: the limit of what is documented and understood by anyone. Another edge is the edge of what can be commercially built. And another edge is the mainstream of what can be commercially built.

The true edge of the unknown is where academic research lives. The second edge is where new products and businesses live. The third edge is where the vast majority of all research and development occurs.

[Figure: 2014-04-06_1452, the three edges]

The picture above shows these three edges. What it also shows is that the further you are toward the limit of human knowledge, the less innovative everything everyone else is doing appears, and the further you are toward the mainstream, the more magical everything everyone else is doing appears. And I’ll contend here that that’s the wrong view of the universe, and that a lot of our discussion about innovation is really driven by this relativistic picture. Folks on the right look toward the left and want to be more like the people on the left, without realizing that they don’t have to.

One of my recent hypotheses is that anyone who is on any one of those edges will see any new idea as obviously flowing from the context of the ideas surrounding them. The idea may be novel, but not entirely surprising. At the edge, everyone is looking at the same sets of problems and data; it’s the solutions that are unique. Every so often something will come out of left field, but that is for the most part rare.

You don’t come up with a research idea in a vacuum. There is a field, the field has a set of problems that are of interest, and your ideas are vetted in the context of those problems of interest and the quality of the ideas you put forth.

As a concrete example, consider theoretical physics. I recently read a fascinating book by Lee Smolin that has a long diatribe about string theory, but beneath that surface offers a unique perspective on how science is done. Essentially, what theoretical physicists do is come up with theories by looking at data and exploring math as they try to make fundamental insights about the universe. Every so often an idea will have enough merit that other theoretical physicists will take that idea and refine it further, until at some point the refinement results in an experiment, one that requires even more science to carry out, that proves the theory. The whole process can take over 50 years from start to finish.

There are three essential edges here as well: one is coming up with brand-new fundamental ideas about the universe, another is refining those ideas, and another is figuring out how to turn them into experiments.

So where is innovation happening? In all three areas. Every person is moving the frontier of human knowledge a little bit further.

And yet, the guy who came up with the Higgs boson is far more well known than the vast army of people who turned that idea into an experiment that could verify the result. I remember hearing an interview with the director of the experiment, who had to explain to the NPR reporter how much science and engineering had to be created just to validate that theory.

So is Higgs the only innovator or is everyone else in between an innovator as well?

And that got me thinking that it’s all relative. Everyone is innovating at every step of the way, and our reward-and-star system makes it impossible for us to recognize all of those incremental innovations. We reward the guy with the fundamental idea with a Nobel Prize and ignore everyone else who turned it into something real.

But so what? Well, the reason this matters to me is that when my boss brings up the dreaded We need more innovation discussion, I worry that his idea is that we need more Higgses… And that by setting that bar for innovation, we’ll essentially tell everyone that unless they come up with new fundamental ideas, they don’t have anything of any value…

And so, coming back to my discipline, I think of Google. Google is about a decade ahead of everyone else in data center infrastructure. You read their papers and think, my God, they are magicians, and to a certain extent they are. More importantly, they are also at the edge of what is commercially buildable, and they, uniquely, have access to the data and the problems that will be worth solving in 10 years, and so are able to solve them first. For the rest of us struggling in the relativistic past, it can feel like we’re in the stone age, being blessed by these aliens with gifts from the future. But really, what’s going on is that they are just seeing the problems before the rest of us have a chance.

And so if you’re in the same space as Google, you can feel worthless because you’re not innovating like them. They publish papers and industries emerge; you’re just trying to keep some MySQL DBs from falling over…

But I wonder if at Google they feel the same way I do… I wonder if at Google the solutions are considered novel and ingenious but not miraculous. Perhaps, as Arthur C. Clarke said, their sufficiently advanced technology appears to be magic…

So then, coming down to the next edge, the mainstream world of technology, where you are not at the bleeding edge of the known or of the commercially viable: is innovation possible?

And I think it is. Because innovation is about looking at the data and the problems worth solving, and solving them in a way that is novel. Now, the novelty may not be that novel, because you’re taking something someone else did and adapting it to your context, but that adaptation requires thought and some novel ideas.

But then isn’t that what almost all innovation is? Taking some ideas from some place, looking at some data, considering what the important problems are and trying to do something new?

So then if that’s the case, then the answer to my boss is relatively simple:

Yeah, I agree. Let’s look at some data, let’s look at the problems of interest and let’s try to solve them in some way that is different than how we were solving them yesterday.

And to my current boss’ credit, that’s where we landed. So if your job is to stop the MySQL DBs from crashing, and you come up with a way of doing it that is better than the way things worked a day ago, then you just innovated, and that’s cool.

But I can almost hear the voice of some smart engineer who says: but that’s not real innovation… real innovation is X or Y or Z … and I respond:

What counts as innovation is relative; you can innovate on all the edges, and the only real definition of innovation is the process I just described.

But I am a little bit dissatisfied with that answer. So I will amend it here.

Here’s what I will concede: although the innovations necessary to transform Higgs’ theory into a workable experiment were vast and mysterious and magical, we have to give credit to those who come up with the fundamental ideas, because they are so rare. And in giving credit, we acknowledge their significance, but they should never, ever, ever get in the way of everyone innovating.



Filed Under: innovation
