wrong tool

You are finite. Zathras is finite. This is wrong tool.


Steve Jobs and Jeff Bezos took on the FAA and won

May 4, 2014 by kostadis roussos

As I write this post during takeoff, I am reminded of the power of compelling, amazing products.

For the longest time, the FAA had a silly rule about electronics. Until the Kindle showed up it was a minor nuisance. You learned to read your book during flights, and heck, the size of the seats made it impractical to use a laptop for anyone but business travelers.

Then the Kindle showed up and all of a sudden the rule became a source of intense aggravation, but heck, it only affected those of us who wanted to read.

For the first time the rule became irritating.

But when the iPhone showed up the rule became unbearable.

You couldn’t play a game of Words With Friends, your kid couldn’t keep watching a movie, and you couldn’t read a book. So we started breaking the rule all of the time.

There were moments like these you could never record.

[photo]

Your son clutching his toy helicopter and airplane, eyes as big as saucers, staring out of the airplane window, all because of bad, bad science.

I don’t know if Amazon or Apple ever lobbied on our behalf, but I doubt it.

What took down the stupid rule was that Steve and Jeff created products that consumers demanded to use all of the time, making the rule unenforceable.

Great products can transform the world by creating a demand to overturn something stupid. Something to think about as we build stuff.


Filed Under: innovation

Systems Programming is not Systems Architecture

April 27, 2014 by kostadis roussos

Every so often, I get pulled into a discussion about how you identify a great systems programmer, mostly because I hang out with other systems programmers and we’re evaluating a candidate for a job opening. Then the usual discussion about interview questions and projects emerges. In fact, there is a Quora question that I answered on the topic. The discussion usually devolves into the ability to understand things like the hardware-software interface, kernel internals, asynchronous behavior, etc.

Just recently, at a meeting in Juniper, it struck me that we never talk about the truly rare skill of systems architecture. And more importantly, how do you find and recognize that skill?

So what is systems architecture?

Systems architecture is the ability to understand the abstract system architecture of a problem, understand what kind of hardware options exist, and then define a software architecture that is able to exploit the hardware in ways that add tremendous business value.

 

[diagram]

System architecture is what takes something like the left side of the picture and defines something like the right.

Mouthful ain’t it?

So let’s break this up a little bit.

Abstract system architecture

When you consider a system of any kind, there exists a way to describe that system that is decoupled from any implementation yet at the same time is readily recognized by experts in the art.

Let’s consider something like a file system. A trivial file system has a block-virtualization layer that maps logical blocks to physical blocks; a way to organize virtual disk blocks into containers, which can then be organized in various interesting ways involving different hierarchies; a mechanism for writing blocks to disk; and a mechanism for reading blocks from disk into memory.
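To make that block-virtualization layer concrete, here is a toy sketch. Every name and structure in it is invented for illustration; no real file system looks quite like this.

```python
# Toy sketch of a trivial file system's block-virtualization layer.
# All names and structures here are invented for illustration.

class BlockVirtualizer:
    """Maps logical block numbers to physical block numbers."""

    def __init__(self, num_physical_blocks):
        self.free = list(range(num_physical_blocks))  # unallocated physical blocks
        self.table = {}                               # logical -> physical map

    def write(self, logical, data, disk):
        # Allocate a physical block on first write; a write-anywhere
        # design would instead allocate a fresh block on *every* write.
        if logical not in self.table:
            self.table[logical] = self.free.pop()
        disk[self.table[logical]] = data

    def read(self, logical, disk):
        # Unwritten logical blocks read back as zeros, like a sparse file.
        phys = self.table.get(logical)
        return disk[phys] if phys is not None else b"\x00"


disk = {}  # stand-in for physical storage
fs = BlockVirtualizer(num_physical_blocks=8)
fs.write(5, b"hello", disk)
print(fs.read(5, disk))  # b'hello'
```

Even at this toy scale, the map is the critical piece: lose one data block and you lose one file, but lose `table` and you lose everything.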

Immediately, anyone who is expert in the art will point out a shit-load of stuff that I glossed over. And that discussion is important, because system architecture is about agreeing on the important things that are always there. And, more intriguingly, on which things that seemed important can be dropped.

But beyond just being able to articulate the abstract system architecture, you also have to have a keen insight into the relative computational complexity of the pieces. For example, how much memory and CPU does maintaining a map consume versus doing a read or write?

And beyond the computational complexity, understanding what pieces must be very robust and what pieces can be less robust is also important – for example, the individual blocks are unimportant unless they contain map information.

Understand what kind of hardware options exist

Given an abstract system architecture, the next question is how to manifest that model. The truth is that systems architecture is the ultimate wrong-tool problem. The perfect abstract system cannot be implemented without huge compromises to business value, whether in terms of performance, cost, or availability.

For example, in the case of file systems, there is the choice of processor, memory, and what kind of storage you will use. The more CPU and memory you have, the more computation you can do per IO and the less IO you need from the underlying physical sub-system. The faster the physical storage system you have, the more performance you need out of the CPU, because you have less time to do work.
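A back-of-the-envelope sketch of that trade-off. All latencies and hit rates below are invented for illustration; the point is only that misses dominate the average, so more memory buys outsized gains.

```python
# Back-of-the-envelope model of trading memory (cache) against disk IO.
# All latencies and hit rates are invented for illustration.

def avg_read_latency_us(hit_rate, mem_us=1.0, disk_us=5000.0):
    """Average read latency in microseconds for a given cache hit rate."""
    return hit_rate * mem_us + (1.0 - hit_rate) * disk_us

# More memory raises the hit rate; moving from 90% to 95% hits
# roughly halves the average latency, because misses dominate.
print(avg_read_latency_us(0.90))  # ~500.9 us
print(avg_read_latency_us(0.95))  # ~250.95 us
```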

Understanding the tradeoffs and trends is really important. And understanding the different kinds of options within a category is really important.

What’s not important is understanding the exact details until you get to actually building a specific instance of your architecture.

Knowing that CPUs of type A perform 3x better than those of type B, and what the projected performance curve over the next 5 years looks like (from vendor roadmaps), is crucial, especially if type A has a different architecture than type B, with different tradeoffs on how you write software.

Knowing the ratio of performance between Disk and Memory is important.

And knowing how all of these ratios work with each other is also really important.

Add tremendous business value

So I am jumping ahead because this is the most peculiar statement. The thing about abstract systems architectures is that they can be viewed as an end goal in and of themselves. And there is a lot of value in pursuing that research and understanding.

In my mind, systems is really about building something in the here and now, and something that is built in the here and now needs to add value in some material way. And, for better or for worse, I use the term business value to define that material way.

Perhaps more prosaically, a better definition is that systems architectures that are interesting have to deliver better top-line performance that is sustainable, better availability, or better price/performance, where price can now include power consumption.

Define a software architecture

Most software architectures look like a collection of boxes that have arrows that point to each other.

Fundamentally, a systems software architecture is a decomposition of a software system that maps to how the hardware can be taken advantage of to deliver exceptional business value. A systems software architecture will, almost definitionally, not look like the abstract system architecture of the system, because the implementation in the real world requires trade-offs to deliver value.

This kind of decomposition, in my mind, is the essential difference between systems architecture and user application programming. Systems architecture considers how the hardware behaves and decomposes the software to take advantage of the hardware capabilities. User application programming considers how people behave and decomposes the software around that axis.

So why is this important?

Massive revenue and value opportunities exist when you are able to take an abstract system architecture, and then define a software architecture that leverages new hardware that allows you to get a 100x improvement along some kind of business value axis.

NetApp, back in the day, is an example of such an opportunity. Hitz and others were able to see a huge opportunity around RAID and were able to articulate a software architecture that exploited RAID in a unique way that then delivered massive business value.

Not every systems architecture is that valuable, mind you, but some are.

And this skill of defining architecture is applicable whether you’re building an OS, a radically new kind of PHP compiler, or a cloud application. And this skill is, in my mind, not a skill we spend enough time defining, examining, and interviewing for.

 


Filed Under: innovation

Failure in a packaged world

April 26, 2014 by kostadis roussos

In the digital download age, failure gets deleted from a collection of web servers. Nobody bothers to record your failure; there is nothing left of your existence.

Does anyone remember the Pets.com sock puppet?

Other than digital images and entries in Wikipedia, the failure has disappeared.

It wasn’t always that way.

In 1982, a ten-year-old version of me begged and pleaded for his mom to buy him a video game based on the hit movie E.T. As the proud owner of an Atari gaming console, the expensive cartridge offered a way to relive the movie experience.

I don’t remember much of the game, other than owning it and waiting in line to get it. I think that was the high point of owning the game.

Many, many years later (approximately 20 and many eons after I tossed that Atari system and the game cartridges that went along with it)… I discovered on the web that apparently the game wasn’t that successful.

Apparently I wasn’t the only person to think the game wasn’t memorable.

There was an urban legend about a cache of games sitting in the desert because no one wanted to buy them. And lo and behold, the truth was out there…


It also turns out to be another lesson in never saying never: I once wondered whether technology archaeologists would come to exist in 100 years. Well, it turns out they are already here…


Filed Under: innovation

Never say never, lessons in technology innovation

April 25, 2014 by kostadis roussos

Yesterday I committed the cardinal sin of assuming a problem would not be solved.

I assumed that the state of the art of the storage was:

[images: disk platter, magnets, gramophone]

 

Also known as spinning magnetic rust.

And whenever you make an assumption like that, you end up being screwed within 24 hours of saying it.

A buddy of mine pointed out that there is a solution.

Apparently we can use quartz to store data for 300 million years.


Well then.

 


Filed Under: innovation

Warhol and Decoding the Past

April 24, 2014 by kostadis roussos

The recent announcement from the Warhol archives about recovering some Amiga art got me thinking about how miraculous this feels. And I am not talking about Warhol using an Amiga, but about us being able to recover the data.

When I think about signature scenes in science fiction, the activation of the ancient computer and recovery of the data from the data banks resonates.


 

The story resonates because it goes to the even deeper tale of finding ancient knowledge lost in the ground, a story that the historian in me finds particularly alluring.

As I got older, and especially after a decade at NetApp, I learned how absurd the tale was. There are several challenges in decoding data. The first is that the media itself requires special devices that fail over time and are no longer built; the second is that the data format is tied to some software that requires a special piece of hardware that may no longer be manufactured and has failed; and finally there is the very unfortunate reality that the media itself fails over time. This doesn’t even begin to address the challenge of information encoding and decoding and, of course, encryption…

One of the things that fascinates me about the future and the cloud is that when some bad event happens, vast amounts of knowledge will be lost. What if all of science and technology abruptly vanishes in a sea of lost electrons, leaving the future with nothing but documents about stuff we invented last century, stuff we might have destroyed because it was digitized? Who wants to keep dead-tree media when you can digitize it?

And even if I don’t care about the cloud, I worry about me. Will everything about me get deleted because my credit card expired after I died?

And when I think about that some more, and I think about biodegradable infrastructures, and products, I begin to wonder, what if there was another civilization like ours, that had completely digitized their knowledge, had a biodegradable physical infrastructure and some catastrophic event occurred that only left these primitive dwellings behind?

So when I see someone pull up an old image that was stored on electronic media on a long-lost computer, I am amazed…

And then I wonder, will we have a profession of computer archaeology that emerges 100 years from now?

 

 

 


Filed Under: innovation

Supercomputer Markets and the PC

April 21, 2014 by kostadis roussos

My first job out of school was in 1995 working for Silicon Graphics on high end supercomputers. In particular, I was working on the Irix kernel scheduler.

In fact, my first paper was about a piece of technology that is now viewed as very archaic: batch scheduling. At the time the problem was an area of active research…

Supercomputers were about to hit a brick wall that year, thanks to a combination of the end of the Cold War, which killed DARPA funding, and the increased performance of x86 processors and networks, which made clustering technologies good enough for an increasingly large share of the computational pie.

In parallel with the implosion of SGI’s server business, the high-end graphics business imploded as well.

The Octane was supposed to be the next-generation high-end workstation,

[photo: SGI Octane]

only for SGI to discover that the combination of AGP and Nvidia made a completely custom design… well, less valuable.

1996 was the last profitable year for SGI.

One of the more vivid recollections I have of the era is the discussion of the Top 500 supercomputer sites. Folks in the supercomputer biz bemoaned that Intel would soon dominate the list with commodity computing systems… that the entire era of supercomputers, with their amazing underlying technologies, was about to go away.

Nearly a decade after I left SGI, I attended a conference in Vail where I heard the exact same speech.

Ten years later, the Top 500 list still has a collection of eclectic system designs.

And that got me thinking about supercomputers and their markets and the business economics …

The most important part of this blog is to tell you that other people have written about this elsewhere. The most famous, and the most brilliant, is the book titled The Innovator’s Dilemma by Clayton Christensen. If you haven’t read that book, stop and go read it.

Waiting.

Did you read it?

Waiting.

Good, you’re done.

Alright.

So what is a Supercomputer market? A supercomputer market is a market where the computational requirements are increasing faster than Moore’s law or are inherently so large that conventional computing systems are too slow at present and for the foreseeable future.

A much greater systems architect than I once described it as: the customer wants the performance to increase 4x every two years.
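To see how quickly that 4x-every-two-years demand outruns commodity hardware, compare it against a rough 2x-every-two-years Moore’s-law pace. This is a toy calculation, not a forecast, and the 2x figure is just the common reading of Moore’s law:

```python
# Toy comparison: demand growing 4x per two years vs. commodity
# performance growing ~2x per two years (a rough Moore's-law pace).

def growth(factor_per_two_years, years):
    return factor_per_two_years ** (years / 2)

for years in (2, 6, 10):
    gap = growth(4, years) / growth(2, years)
    print(f"after {years} years, demand outruns commodity by {gap:.0f}x")
```

After a decade the gap is 32x, which is why customers in such a market have no choice but to keep buying exotic hardware.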

To deliver that kind of performance, vendors have to deliver exotic computational architectures that are at the limits of what humanity can create at this time.

The price the customer pays for that kind of horse power is determined by the business value of the problem being solved.

And as long as the customer’s computational needs remain beyond what conventional computer systems can deliver, every subsequent generation will command about the same price or more.

The nice property of supercomputer markets is that it’s practically impossible for new entrants to compete in the space unless they can figure out how to fundamentally disrupt the incumbent. Especially if the market has been evolving for a long time.

If you had to build a new supercomputer from scratch the capital cost would be staggering, never mind the challenge of finding the brain power necessary to build it outside of the established vendors.

The amazing sweet spot for a business is when a supercomputer market is a broad market with huge computational needs that can only be satisfied by supercomputers. Basically, you are building amazing computers that people will pay a large chunk of cash for, because the choice is to buy them or not be in the business that requires them. And there are a lot of those people.

A key indicator that you are not in a supercomputer market is that the customer doesn’t have to buy the next generation of hardware to remain in the business that uses it. In other words, the next generation is an improvement but no longer essential.

How is this different from The Innovators Dilemma?

The Innovator’s Dilemma focuses on individual vendors and how they get disrupted, but ignores the broader market trend. In point of fact, the Innovator’s Dilemma focuses on how new architectures can disrupt alternative architectures while the broader supercomputer-economics trend remains.

If we look at disk drives, the canonical example in the book, the Innovator’s Dilemma observes that disk drive vendors came and went. My observation is that there was a macro need for more storage, and as long as that remained true, supercomputer economics would hold. The broader trend held true even as many vendors got disrupted… It was only in 1996, when capacity prices collapsed, that the average computer user had their storage needs satisfied.

My favorite example is the PC because it’s not a supercomputer 🙂

For almost 18 years, consumers would spend about 5k on a new PC, because the next-generation PC was so much better than the last. The computational needs of consumers were inherently greater than what computers could deliver in 1981, and remained so until 1999. For 18 years, all you had to do if you were Dell was build a computer, and people would buy it because their needs were unmet by the current generation.

And that brings me to the problem with supercomputer markets …

It turns out that there are two kinds of supercomputer problems. The first is what I call inherently computationally hard. These are the kinds of problems where you are trying to simulate physical processes or deal with hard problems, and as a result they are inherently computationally expensive and will remain so indefinitely. The second kind is what I call capped computational problems.

And maybe this is my second insight.

A capped computation problem is one where humans are consuming the computation directly. If you are building something for people, eventually you run into that bottleneck – the human ability to perceive and interact.

Put differently, supercomputer markets can exist indefinitely as long as you are processing machine level interactions that are not gated by human processing.

So returning from orbit… 

So what ends this kind of macro trend?

  1. Solving the problem
  2. Lack of interest in the problem

Some examples of 1 include things like extreme high-end 3D graphics and PCs. An example of 2 is the Cold War dividend: when Clinton cut military funding, that cut funding for the purchase of supercomputers, because a class of problems was simply no longer that interesting.

The problem with supercomputer markets that are constrained by human perception is that eventually the computers get too fast, and the markets eventually end. This doesn’t mean the market for the product disappears, but the very nature of the market for the product changes.

So what happens when supercomputer markets end?

Typically the incumbent vendor goes out of business super-fast. Basically no one wants to buy their next generation hardware because the last one solved the problem.

And at that point the market transitions to an enterprise market with different economics. The most important being that customers want the next generation to be twice the performance at half the cost.

Visually

[diagram]

 

What this picture tries to show is that while customer demand is unmet by the supercomputer technology, the supercomputer technology continues to thrive, selling each new generation of hardware. Once the supercomputer technology meets customer demand, then customer demand shifts to mainstream technologies.

This is not a case of the mainstream getting good enough; instead it’s a case of the customer no longer caring about incremental improvements because the problem is solved.
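The crossover the picture describes can be sketched with a toy exponential model. All numbers here are invented: the problem has a fixed size (demand), technology performance doubles every two years, and the supercomputer market ends in the year the two curves meet.

```python
# Toy model of the crossover: the problem has a fixed size (demand),
# and technology performance doubles every two years. The supercomputer
# market "ends" when performance first meets demand. Numbers invented.

def years_until_demand_met(demand, initial_perf=1.0, doubling_years=2):
    years, perf = 0, initial_perf
    while perf < demand:
        years += doubling_years
        perf *= 2
    return years

print(years_until_demand_met(100))  # 14
```

A problem 100x beyond today’s hardware sustains the market for about 14 years in this sketch; after that, the next generation is merely an improvement, not essential.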

 


Filed Under: innovation, Selling

Do not move fast and break things

April 13, 2014 by kostadis roussos

If you’re an engineer, the probability of understanding the life and times of folks in IT is marginal at best.

As engineers, our job is to figure out new ways of building things. We’re paid to innovate and break things. And when the things we innovate on have big outcomes, we get big rewards.

IT, on the other hand, has a different function. IT’s job is to ensure that there are standard operational systems in place that allow the business to run efficiently. In many ways, IT is not there to innovate; in many corporate environments IT exists to thwart unnecessary innovation. Essentially, lines of business are allowed to innovate in their areas, but to get something deployed at scale, the conservative, time-consuming IT processes minimize the risk of some new, disruptive, immature technology taking down your company.

Most technology companies refer to this as the “extended enterprise sales cycle“.

So if Facebook says their motto in engineering is “Move fast and break things”, in IT it would be slightly different:

[image]

If you are in IT, one way to advance your career is to advocate for a new product or technology and have that product or technology work out. Adopting the technology becomes a bet-your-career kind of moment.

And if the technology doesn’t work, well, that is the end of your career. You’re the douche who introduced Product A that everybody hates, and everybody says you should be fired. How many of us have sat around using some piece-of-shit product, screaming for the head of the fool who forced us to use it? Many. Many. Many.

The old adage – no one ever got fired for buying X – is real. If you make a safe choice and it doesn’t pan out, it’s not your fault, it’s the vendor’s fault. If you make a risky choice and it doesn’t pan out, it’s your fault.

To engineers this is perplexing. The whole point of innovation is to move the ball forward, so failure is normal. But IT doesn’t operate at the edge of innovation; IT must keep the lights on.

So when someone in IT bets on your technology they are betting their entire future on you.

Let’s dig into that a little bit more.

Typically you’re selling a piece of technology into a space where IT already has inadequate solutions. The natural and correct reaction of folks in IT is to:

  1. deny the problem
  2. try to solve it with the existing vendors
  3. only if they are desperate, try something new.

Why do they deny? 

Because introducing the new technology means disruption and change. And when you’re short on resources trying to keep the lights on, the last thing you want is disruption and change. Most of the time, people complaining about a problem just go away, so it’s best to just wait and see if this is a real issue.

As for why stick with the existing vendors?

Because the minute you introduce a new vendor there was one way of doing things, now there is two, and that means that everything gets more complicated.

Because you have to keep two sets of employees trained, one for vendor A and one for vendor B. And you can sit there and wait for a heterogeneous management solution, but you are finite.

Worse, new vendors have new issues. The existing vendor, you understand them; they are like a partner you’re comfortable with. Things might not be perfect, but you understand each other. The new vendor is like an exciting new partner that promises so much, but once you get past the first date you start to learn things, and it will take a while to get comfortable.

If you are desperate, try something new

And so when you’re desperate you will be forced to try something new. The problem will not go away, and your existing vendor can’t solve it.

The dice are rolling…

So you do extended Proofs of Concept (PoCs) to prove the new guy can’t solve it either. Because you really, really want the new guy to go away so that the problem will go away.

But the new guy doesn’t fail, and the PoC is successful, so you’re at the roll-the-dice moment.

Let’s take a step back and think about this situation: you have an acute problem that you can only solve with new technology, and if the outcome is a failure, it’s your fault, because it was your job to make the right decision.

This is usually a make-or-break-your-career moment.


If the new vendor works out, you’re a hero. If the vendor doesn’t, your replacement can figure out what to do next.

And the stakes are typically not that high… it’s not like sales really needs their email or the web site needs to be up.

So what happens when your new technology fails?

Bad things.

The acute problem is still acute and needs to be solved, so your boss looks for new ideas and new solutions, typically elsewhere.

It’s not bad, it’s not evil, it’s just the way the world works.

The guy who bet his career on the outcome is now wondering how long it will take for him to recover from the mess…

Some final thoughts. 

As a vendor, I remember selling a piece of technology to a customer, and the technology failing. The effect of the failure was cataclysmic: our ability to penetrate parts of that customer’s infrastructure was permanently handed to our competitors. The fact that the underlying problem existed in every vendor’s infrastructure was irrelevant. Our product had failed. As for the person who had bet his career on our tech, he was never going to trust us again. I remember the anger in his voice, and the feeling of outrage.

As an engineer, I was kind of like: dude, get over it.

Until I was on the receiving end of such a mess.

A vendor made a good decision for them that screwed my company over. I can’t argue with their decision, and they were very good about telling me about it, but the reality was that I was screwed, my team was screwed, and my career was hurt. The net effect of that right decision: I will never trust that vendor again. I’ll buy what they have, but I won’t ever bet on them keeping their roadmap promises.

The promises we make to our customers affect their lives. And when we fail to deliver, those promises can cost careers. Not ours, theirs.

At some level, you start to wonder what this really means. Well, if the customer is a really prominent player in an industry and they fail with your tech, the conservative nature of IT will be to reject your tech for a good long time. The incumbent you are displacing will use that one failure to keep you out for a very, very long time. Worse, the guy who took a risk will probably tell all of his friends what he thinks about your technology. And everyone will look at him as a cautionary tale: if you bet on vendor X, you too can become a bitter, grumpy old dude, so better to use the standard technology. Let someone else take the risk.

As I look at the world with slightly more jaded eyes, I realize IT is Zathras. They are always confronted with the wrong tool; worse, unlike Zathras, they don’t understand how the tool is built, and when they pick the wrong tool, they find themselves in the wrong place…

So to all the folks in IT my products ever screwed, I feel your pain and I am sorry.

 

 


Filed Under: innovation, Selling

A relativistic theory of innovation

April 6, 2014 by kostadis roussos

One of the most irritating conversations I get dragged into is that moment when some senior management person or senior technology boss says:

We need to do more innovation.

My heart stops. Oh-god-no… If we’re asking the question we’re already screwed.

And Lord Help Me, but the conversation that always follows the mandate for more innovation is the debate:

What is innovation…

And then every single person in the organization becomes a theologian debating how many angels can dance on the head of a pin… We have offsites, and training classes, and meetings, and posters, and metrics, and …

Because, ultimately, one man’s innovation is another woman’s obviously boring idea.

And that got me thinking. What if the man and the woman are right? That innovation is actually relative to your point of view?

When I think about innovation, I think of the process of doing something new. The question then becomes, what is new?

Let’s consider three interesting edges in the world of human knowledge. One edge is the true edge of the unknown: the limit of what is documented and understood by anyone. Another is the edge of what can be commercially built. And the third is the mainstream of what is commercially built.

The true edge of the unknown is where academic research lives. The second edge is where new products and businesses live. The third edge is where the vast majority of all research and development occurs.

[Image: diagram of the three edges]

The picture above shows these three edges. It also shows that the closer you are to the limit of human knowledge, the less innovative everyone else’s work appears, and the closer you are to the mainstream, the more magical everyone else’s work appears. I’ll contend here that that’s the wrong view of the universe, and that a lot of our discussion about innovation is really driven by this relativistic picture. Folks on the right look towards the left and want to be more like the people on the left, without realizing that they don’t have to.

One of my recent hypotheses is that anyone on any of those edges will see a new idea as flowing obviously from the context of the ideas surrounding them. The idea may be novel, but not entirely surprising. At the edge, everyone is looking at the same sets of problems and data; it’s the solutions that are unique. Every so often something will come out of left field, but that is rare.

You don’t come up with a research idea in a vacuum. There is a field, the field has a set of problems that are of interest, and your ideas are vetted in the context of those problems of interest and the quality of the ideas you put forth.

As a concrete example, consider theoretical physics. I recently read a fascinating book by Lee Smolin that contains a long diatribe about string theory but, beneath that surface, offers a unique perspective on how science is done. Essentially, theoretical physicists come up with theories by looking at data and exploring math as they try to make fundamental insights about the universe. Every so often an idea will have enough merit that other theoretical physicists take it and refine it further, until at some point the refinement results in an experiment, one that may require even more science to build, that proves the theory. The whole process can take over 50 years from start to finish.

There are three essential edges here, too: one is coming up with brand-new fundamental ideas about the universe, another is refining those ideas, and another is figuring out how to turn them into experiments.

So where is innovation happening? In all three areas. Every person is moving the frontier of human knowledge a little bit further.

And yet the guy who came up with the Higgs boson is far better known than the vast army of people who turned that idea into an experiment that could verify it. I remember hearing an interview in which the director of the experiment had to explain to an NPR reporter how much science and engineering had to be created just to validate the theory.

So is Higgs the only innovator, or is everyone in between an innovator as well?

And that got me thinking that it’s all relative. Everyone is innovating at every step of the way, and our reward and star system makes it impossible for us to recognize all of those incremental innovations. We reward the guy with the fundamental idea with a Nobel Prize and ignore everyone else who turned it into something real.

But so what? Well, the reason this matters to me is that when my boss brings up the dreaded we-need-more-innovation discussion, I worry that his idea is that we need more Higgses… and that by setting that bar for innovation we’ll essentially tell everyone that unless they come up with new fundamental ideas, nothing they do has any value…

And so, coming back to my discipline, I think of Google. Google is about a decade ahead of everyone else in data center infrastructure. You read their papers and think, my God, they are magicians, and to a certain extent they are. More importantly, they are at the edge of what is commercially buildable, and they uniquely have access to the data and the problems that will be worth solving in 10 years, and so are able to solve them first. For the rest of us struggling in the relativistic past, it can feel like we’re in the Stone Age, being blessed by aliens bearing gifts from the future. But really what’s going on is that they are just seeing the problems before the rest of us have a chance.

And so if you’re in the same space as Google, you can feel worthless because you’re not innovating like them. They publish papers and industries emerge; you’re just trying to keep some MySQL databases from falling over…

But I wonder if at Google they feel the same way I do… I wonder if at Google the solutions are considered novel and ingenious, but not miraculous. Perhaps, as Arthur C. Clarke said, their sufficiently advanced technology merely appears to be magic…

So, coming down to the next edge, the mainstream world of technology where you are at neither the bleeding edge of the known nor of the commercially viable, is innovation possible?

And I think it is. Because innovation is about looking at the data and the problems worth solving, and solving them in a way that is novel. Now, the novelty may be modest, because you’re taking something someone else did and adapting it to your context, but that adaptation requires thought and some novel ideas.

But then isn’t that what almost all innovation is? Taking some ideas from somewhere, looking at some data, considering what the important problems are, and trying to do something new?

So then if that’s the case, then the answer to my boss is relatively simple:

Yeah, I agree. Let’s look at some data, let’s look at the problems of interest, and let’s try to solve them in some way that is different from how we were solving them yesterday.

And to my current boss’s credit, that’s where we landed. So if your job is to stop the MySQL databases from crashing, and you come up with a way of doing it that is better than the way things worked a day ago, then you just innovated, and that’s cool.

But I can almost hear the voice of some smart engineer who says: but that’s not real innovation… real innovation is X or Y or Z … and I respond:

What counts as innovation is relative. You can innovate on all the edges, and the only real innovation is the process I just described.

But I am a little bit dissatisfied with that answer. So I will amend it here.

Here’s what I will concede: although the innovations necessary to transform Higgs’s theory into a workable experiment were vast and mysterious and magical, we have to give credit to those who come up with the fundamental ideas, because they are so rare. In giving credit, we acknowledge their significance, but that should never, ever, ever get in the way of everyone else innovating.


Filed Under: innovation

You are finite, Zathras is finite. This is wrong tool.

March 31, 2014 by kostadis roussos

Welcome, reader, to my tech blog.

The title of my blog is taken from a scene in Babylon 5, a great little bit of television.

Let me set the stage: the protagonists of the show are caught in a temporal crisis, and the equipment they need to escape is busted.

Zathras

Zathras, the engineer who is supposed to fix the broken equipment, is frantic. Ivanova, another character who in this scene plays the role of management, pesters him to hurry up because they are running out of time. Zathras, irritated, tells her:

      Can not run out of time, there is infinite time. You are finite, Zathras is finite. This is wrong tool.

At some level, this is the essence of the challenge of engineering. There is enough time to do anything; the problem is that engineers don’t have enough time to do everything. And we are always confronted with trying to do something with the wrong tool. Trapped in a finite body, we are confronted with the reality of lacking the tools to make our visions real.

This blog will be devoted to writing about technology in a way that is somewhat philosophical and meandering, attempting to see patterns and understand events that have happened. There will be a historical bent: I like to think about the past, understand it, and see how it can explain the present. I will combine things from random parts of the industry because I like to think about the whole industry.

If you’re looking for musings on the state of platforms, technology, and trends, this may be an interesting place to stick around. There will be some focus on the nexus of business and technology, but I am not a businessman and won’t spend too much time there.

If you are looking for deep detailed analysis of some specific piece of technology this will be a disappointment.

At the end of the day, my focus will be on trying to understand why we are using the wrong tool and the consequences of using the wrong tool.

Sometimes, I might even talk about the right tool…

[Image: the right tool]

In that wistful way engineers talk about the right tool, like a unicorn just past the end of the rainbow.


Filed Under: About the blog
