wrong tool

You are finite. Zathras is finite. This is wrong tool.


It’s not physics but reasoned arguments

June 21, 2014 by kostadis roussos

In a recent meeting, a hardware engineer turned to me and expressed his frustration at the state of a software stack. The frustration centered on the unbelievable complexity the software had accumulated over the years. There was a feeling that somehow, somewhere, the software team had lost its way.

An attempt was made to draw an analogy to hardware. The claim was that hardware that tries to fight physics inherently fails to work: physical laws constrain design, and designs that fight those laws end up failing. Therefore, the argument went, there were certain physical laws our software had violated, and that was why it had become so complex.

In particular, an appeal was made to modularity and strong API boundaries as the keys to great software. To lack those things meant that we were breaking some law of physics, or its nearest software equivalent.

And it got me thinking. A lot. It was an interesting perspective. In all my discussions about software with software guys, the idea that physics would have anything to do with anything never came up.

Over the last month, I came to the conclusion that the hardware engineer was correct about the need to follow some principles, but those principles were not physical laws.

All you have to do is read this paper: Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs by John Backus to realize that software does not view physics or the underlying hardware as its master. In fact, Backus calls on software practitioners to create software that will force hardware people to create new kinds of hardware:

There are numerous indications that the applicative style of programming can become more powerful than the von Neumann style… when these models and their applicative languages have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them. Only then, perhaps, will we be able to fully utilize large-scale integrated circuits in a computer design not limited by the von Neumann bottleneck

Physics is the thing that gets in the way of our software, the thing that enables us to execute our software in the real world, but not our master.

Instead, I would argue that the laws software ignores at its peril are the laws of reasoned argument.

What do I mean? Software has the wonderful property that its correctness is, in general, formally unknowable. We can't prove that it is correct. And if we can't know, what do we do?

All we can do is reason about the software. And to reason about software is to try and understand an argument.

If that is the case, then a well-crafted argument is:

  1. Clear
  2. Concise
  3. Structured
  4. General in structure but grounded in specific facts
  5. Complete
  6. Novel
  7. Built on previous arguments

All attributes of good software are actually the attributes of good reasoning.

As an argument acquires complexity it becomes harder to follow, harder to re-use, harder to leverage, and harder to judge for correctness.

And that fact, perhaps, explains why software is so hard. Good software reflects the quality of our ability to reason and articulate our reasoning.

Things like modularity and APIs are really techniques to construct good arguments.

Our software mess wasn't because we had failed to follow some law of physics; it was because we had chosen to become sloppy in our thinking and our reasoning.

What does this mean, practically?

As we struggle with how to make software better, what we don't talk about is the need to improve our engineers' ability to reason. Our schools, focused on mechanisms and techniques, spend very little time on training students how to argue.

Perhaps, there is something we need to learn from the law. Unclear.


Filed Under: innovation

Caribbean Cruise Lines and the Turing Test

June 5, 2014 by kostadis roussos

When the Turing Test was first posited, we viewed it as a terrifying and beautiful milestone. Today I see that it was in fact an evil milestone: when computers are able to pass the Turing Test, computers will be used to sell empty seats on Caribbean cruise lines.

I got a robocall where the caller was a robot that responded to my questions about cruises before I figured out it was a machine.


The robocall didn't immediately start spewing about cruises; instead it waited for me to respond. And then the response was contextual… Only when I recognized the stupid robot voice did I realize what was going on.

The Turing Test is an attempt to articulate when a computer is intelligent without trying to understand if a computer is intelligent. The goal was to sidestep the philosophical debates that made the engineers trying to build stuff cry.

The hope was that an intelligent computer could make better decisions than humans, because it could decide faster with more information in the best case; and in the worst case we would have infinite free labor, freeing man to have more time for himself.

Except we, computer scientists, lacked vision.

It's like how the folks in computational photography believed that unless you could make pictures more realistic, there was no point in the technology.


I mean who wants a picture that has been made to look worse….

AI missed its true calling. The Turing Test is really about sales people calling you 24×7 to irritate you with offers you don't want.


Filed Under: innovation

How Firewalls Killed Zynga or Why the Mobile Internet Took Off in 2012

May 25, 2014 by kostadis roussos

In life, when confronted with a mystery I always use this principle:

Occam’s conspiracy razor: Never ascribe to malice when stupidity will suffice. 

In 2012, Zynga experienced a rapid decline in its numbers while, at the same time, mobile gaming and mobile traffic took off. That mobile would take over wasn't a surprise. We all knew it was going to happen; what was surprising was how fast web-based browsing dropped.

Mobile was going to be big, but why was it at the price of web? Why did the web die out?

Occam’s conspiracy razor was that people just wanted to move to mobile, and that 2012 was the year it happened. There was no conspiracy, no external event, just a mass migratory movement.

Early 2012

In early 2012, as I looked over the landscape of the mobile internet, my reaction was that this was going to be a platform that added more people, not one that took away from the web. Adults would use both the mobile device and the desktop for all of their activities. Most people, sitting in front of computers during working hours, would play their games, do their Facebooking, and do their shopping on their computers while working. Later in the day, when they went home, they would use their mobile devices on the couch while they hung out with their families and friends.

That. Did. Not. Happen.

At the end of 2012, we could confidently call the death of the desktop web. Desktop DAU for Zynga cratered, and Facebook web DAU had flatlined.

What happened?

What happened was that while Zynga and Facebook were creating the ultimate form of escapist fun for the masses, corporate America noticed.

And what they noticed was that web surfing, which had been an annoying tax on employee productivity, was becoming a massive time sink. Employees were playing their Zynga games and sharing on Facebook with family and friends instead of doing their jobs, and incidentally consuming vast amounts of bandwidth. Later Netflix entered the picture, and the amount of lost productivity and bandwidth was getting serious.

Before Facebook, social non-professional interaction was hidden in corporate and private email or the phone. The water cooler, the smoke break, the lunch room, the break room: these were where we hid minutes of wasted time every day.

Before Zynga, it was really hard to play a game on a corporate laptop. The security teams locked computers down so tight that nothing could be installed. And to be honest, you're not going to take a 5-minute break to play some Call of Duty. Most employees played Minesweeper because it was the only game already on the machine.

So lost productivity was visible but unblockable.

Enter Zynga. Farmville at its peak had 30 million DAU. That's an insane number for a game. Those 30 million people were not playing at home: peak DAU hours were during the working day. Corporate America noticed.

At the same time as Facebook and Zynga were taking over the world, ng-fws (next-generation firewalls) came into existence. Their claim to fame was that they could identify applications and then apply security policies at the application layer.

Apparently, as I recently discovered, for the last several years the basic pitch of an ng-fw begins with:

How do you stop your employees from playing Farmville?

Obviously it's more nuanced… Not really. I mean, sales guys as recently as last week positioned the ng-fw as the way to block people from playing Farmville. The point was that corporate America had figured out how to control which applications their employees were using, even when those applications lived on the web.
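As a sketch of what "blocking at the application layer" means, here is a toy policy check keyed off a flow's TLS SNI hostname. The rules, hostnames, and function names are all hypothetical; real ng-fws classify traffic with far richer signatures than a hostname match:

```python
# Toy application-aware firewall policy: map a hostname to an "application"
# and decide whether the flow is allowed. All rules and names are made up.
BLOCKED_APP_PATTERNS = {
    "farmville": ["farmville.com"],
    "facebook": ["facebook.com"],
}

def classify(sni):
    """Return the application name a hostname maps to, or None."""
    for app, patterns in BLOCKED_APP_PATTERNS.items():
        if any(sni == p or sni.endswith("." + p) for p in patterns):
            return app
    return None

def allowed(sni, blocked_apps):
    """Policy decision: permit the flow unless it classifies as a blocked app."""
    return classify(sni) not in blocked_apps

print(allowed("apps.facebook.com", {"farmville", "facebook"}))  # False
print(allowed("mail.example.com", {"farmville", "facebook"}))   # True
```

The interesting property, and the reason the pitch worked, is that the decision is per-application rather than per-port: everything rides on port 443, so only this kind of classification can separate Farmville from the corporate CRM.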

Managers wanted to stop their employees from goofing off, and since employees were using communication channels that the managers could choke, management did.

What about SSL? Well, it turns out the firewall vendors figured that out as well. The firewall would terminate the SSL connection to the external site and then re-sign a certificate for it with a corporate CA. The IT teams would helpfully install that CA on employee machines so that no warnings appeared, leaving people oblivious to the fact that their traffic was being man-in-the-middled.
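One way a client could notice this kind of interception is certificate pinning: compare the fingerprint of the certificate actually served against one obtained out of band. A minimal sketch, using stand-in byte strings rather than real DER certificates:

```python
import hashlib

# Certificate pinning sketch: a re-signed certificate has different bytes
# than the site's real one, so its fingerprint will not match the pin.
# The "certificates" here are placeholder byte strings, not real DER data.

def fingerprint(cert_der):
    """SHA-256 fingerprint of a certificate's bytes."""
    return hashlib.sha256(cert_der).hexdigest()

def possibly_intercepted(received_cert, pinned_fp):
    """True if the served certificate does not match the out-of-band pin."""
    return fingerprint(received_cert) != pinned_fp

real_cert = b"-- the site's real certificate --"
resigned_cert = b"-- certificate re-signed by the firewall's corporate CA --"
pin = fingerprint(real_cert)

print(possibly_intercepted(real_cert, pin))      # False
print(possibly_intercepted(resigned_cert, pin))  # True
```

Of course, on a corporate machine the firewall's CA is trusted and the browser shows no warning, which is exactly why the interception went unnoticed.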

Net effect: throughout corporate America, firewalls were quietly blocking access to consumer websites that employers felt were not really related to work.

So what does this have to do with Zynga?

At the same time, at Zynga, I was observing a really weird, unexplained phenomenon. I was seeing evidence of people trying to start our games and failing. Stuck in the cauldron of Zynga, we had no insight into what was going on.

We assumed, applying Occam’s razor, that people were starting to load the game and then quitting. The sheer scale of attempts was mystifying, but we assumed that people had their reasons.

I mean, how else could games be blocked?

Last week I went to Hawaii and experienced ng-fw first hand.

1. Sometimes I couldn’t load the game.

2. Sometimes I could load the game but then the connection would be dropped.

And I realized that our customers had been blocked from accessing our games.

And then it clicked. All of those mysterious reports, all of those users who couldn’t get to our games, all of those bizarre late night sessions trying to understand what the hell was going on, and we missed the most obvious thing of all:

Employers didn’t think their employees should be playing games at work and were doing something about it.

So why was this bad for Zynga?

Let me caveat the next bit with the following: Mark Pincus never accepted the idea that Zynga was a victim. We didn't fail because other people did things to us; we failed because we didn't execute, because we didn't deliver. And I agree. If we failed, it's because we failed, not because other people screwed us.

Let me also caveat, that this is pure speculation. I don’t have any data to back any of this up. At the time I didn’t know what to collect, and now I don’t have access. 

DAU and Engagement

Spend enough time on the Facebook platform, and you know that the only thing that matters is engagement. Facebook wants people to be engaged with their product, they’ll flood you with users to see if they’ll stick, but if you don’t keep those users be afraid, because eventually they’ll point those users somewhere else.

Firewalls were breaking Zynga’s engagement in two ways.

The first was that users were unable to get to the game. With games built on a plant-harvest cycle, if you can't make it into the game to harvest, you're likely to quickly give up on the game itself.

The second was that users were conditioned to stop clicking on our feeds and virals. If clicking on the feed or viral would result in nothing because of firewalls, people stopped clicking on them.

As a result Facebook was seeing a decline in engagement in Zynga games. And because they care about their users, first and foremost, they started to point their users away from our games. And good for them.

Unfortunately this fed a vicious cycle of decline. The more our users left, the more we tried to reach them through virals, the more Facebook users got annoyed with those virals, the more Facebook throttled our virals. And because Facebook was always looking for some other content they could get people engaged with, Facebook pointed the firehose of users elsewhere.

Enter mobile

If there was no mobile platform, I posit Facebook would have seen a flatlining of users as did Zynga. Instead what happened was that people discovered the one device that had a network their corporate bosses could not control, their mobile phone.

With their access to Facebook blocked, employees discovered what teenagers have known since forever, mobile phones are the only way to talk to your friends without anyone getting in the way.

Much like teenagers who used SMS as a cheap way to talk forever with their friends, adults reached out to the mobile device as a cheap way to keep connected once their internet access got blocked.

And mobile Facebook traffic skyrocketed.

And once everyone got used to using their mobile device instead of their corporate laptop to connect with the Internet, the rest was history.

The transition was inevitable, and firewalls forced the transition to happen ridiculously fast in 2012.

Some final thoughts

When I look back at my own mistakes, what I missed about mobile was not the form factor but the access to the users.

At the end of the day, consumer websites live and die by their ability to reach end-users. And unfortunately, the last mile is controlled by corporate entities that may have a dim view of people goofing off on the job.

Mobile is the future because it represents the only reliable way for consumer businesses, net neutrality aside, to reach their customers without some intermediary blocking them.

What I missed at Zynga wasn’t mobile, it was the fact that only on the mobile platform could we be guaranteed access to our users.


Filed Under: Facebook, innovation, Security, Selling, Zynga

Voodoo Debugging in the Age of the Cloud

May 21, 2014 by kostadis roussos

Software engineers, for all of their deep rational view of the universe, tend to sometimes approach debugging like soothsayers inspecting entrails trying to divine what the hell is going on.

 

The scene played out like this:

Bug report: XyZZy is happening

Engineer stares at bug report, stares at software, stares at test case that doesn’t reproduce it.

Engineer then tries things that make sense.

Bug doesn’t go away

Then Engineer tries random acts of coding to see if the bug goes away.

And his peers start saying: Wow, yeah, I think I saw something like that. Did you try toggling the frommle bit?

And the engineer says: But this has nothing to do with the frommle bit, this is tied to pluxion submodule.

And another peer says: Try adjusting the memory size of the core structure.

And the engineer says: But my code never touches that core structure!

And another peer says: We've had success with this kind of problem that we don't understand when we altered the return value of the function.

And the engineer says: But this makes no sense.

And they all respond like a Greek chorus: But it sometimes works.

At this point in time the engineer is hoping and praying that just moving the code around will cause the bug to go away.

My buddy and I used to call that last stage of debugging “Voodoo debugging”.

You don’t understand the problem, you don’t understand why the bug is happening, you have a deadline to meet, so you try random things hoping that the bug will just go away.

And the sad truth is that the simple act of moving things around may cause the bug report to go away. The bug may reappear later in some other poor bastard's bug queue. And the Voodoo debugging will continue as people try this and try that.

In the pre-cloud era of computing, when software engineers believed it was their divine right to understand how it all fit together, this was viewed as a moral failing. Back in the day we could understand how the entire system worked, from the silicon up, and we could use that knowledge to thoroughly debug and understand the problem.

Those who lacked the moral fiber and intellectual rigor to do real software engineering practiced Voodoo debugging. The rest of us understood how the entire system fit together and resolved the issue down to its root cause.

Then I moved to Zynga. And at Zynga it became apparent that Voodoo debugging wasn't just a product of a lack of moral fiber or intellectual rigor; it was a rational response to the environment.

I talk about this a lot in an answer on Quora. The essence of the argument is that systems are in such continuous flux that understanding how the system works in its entirety is impossible, and worse, by the time you understand it, the system has changed.

Bugs come and bugs go. And bugs have an element of Voodoo magic about them. Trying to understand bugs and resolving them is impossible. We are now in the era of trying to resolve issues without fixing them. We have become doctors who offer palliatives rather than cures in the hope that the problem will just go away.

And so I embrace my inner Voodoo doctor. I will try the magic incantations, I will pray that the next update of the API I am using fixes the problem, and I will internalize the truth that finding the root cause is an illusion my earlier self believed in.


Filed Under: innovation

Joe Thornton and Smart Systems

May 11, 2014 by kostadis roussos

On Dec 1st, 2005, the Boston Bruins made a catastrophically piss-poor decision to trade Joe Thornton to the San Jose Sharks. As a result of that trade, San Jose became a perennial contender. Boston did eventually become a great team, but the trade of Thornton could only be described as the agonizingly slow route.

The rationale for the trade was belief in the gutless-Thornton mystique, a story created from the nonsense of his first foray into the playoffs, when, playing with a broken rib, he was only able to get one measly assist in a seven-game series loss to the Montreal Canadiens.

Over the years I have seen Joe Thornton carry mediocre San Jose teams to the playoffs only to be continuously thwarted by Doug Wilson’s inability to get a decent third line…

Or so I think …

My point in all of this, is that Joe Thornton, to anyone who knows hockey, is most definitely not a Bruins player…

Unfortunately, Facebook's algorithms are a little less aware of the NHL and hockey… and produced this trending result:

  • Joe Thornton: Bruins' Thornton fined $2,800 for squirting water

And herein lies the danger of computers and their lack of semantic awareness. They create such delightful moments of absurdity. The algorithms confused Joe Thornton with Scott Thornton.

Without any knowledge of the internals of the implementation, I could suppose that Facebook used the fact that I post stuff about Joe Thornton, including posts about how he once played for the Bruins, to surface a note about Scott.

This is the inherent danger in an era of big data. We have all of this data, we apply clever algorithms to it to create interesting and useful results, and we can immediately share those results with the world. And most of the time the results are good.

The problem arises when the result is not good.

And it's not that computers are less reliable than human beings; humans are just as prone to absurd errors. The problem with computers is how we humans interpret the results, or choose to share the results without interpreting them at all because they are correct most of the time.

The key difference with the past is that the computers had access to less data, so our ability to find relationships where none existed was limited. We knew that the data was too small to trust any result that did not fit our intuition.

But now, with so much data, who is to say that the relationship doesn’t exist?

Perhaps our clever algorithms did find relationships that were only possible because of the amount of data we had collected.
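There is a self-contained way to illustrate the other possibility: with enough candidate predictors, some random series will correlate strongly with any target purely by chance. The numbers and the seed below are arbitrary, chosen only for illustration:

```python
import random

# Spurious-correlation demo: compare the best correlation found among a
# handful of random predictors with the best found among thousands.
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
target = [random.random() for _ in range(20)]
predictors = [[random.random() for _ in range(20)] for _ in range(2000)]
corrs = [abs(pearson(p, target)) for p in predictors]

# Best of the first ten predictors vs. best of all two thousand; the
# latter is always at least as large, usually dramatically so.
print(round(max(corrs[:10]), 2), round(max(corrs), 2))
```

The data is pure noise, yet searching a large enough haystack reliably produces a "relationship". That is the horse in the stars.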

And so we need to be careful and humble as we look at the results. Sometimes we will find things in the data that we didn’t know existed, and sometimes we might be looking at the night sky and seeing a horse in the stars.

Because, I can assure you that Joe Thornton is both not a Bruins player and not playing hockey right now.

And as I review my post, it's tempting to ask myself: maybe the data is telling me a truth, that Joe Thornton is not as great in the playoffs as I think he is. Maybe I need to listen to the fact that he was -6 and scored only 3 points this year, and that that had something to do with the performance of the Sharks. Maybe. Or not.


Filed Under: innovation

Big steak vs Tasting menus and the difference between storage and network system architecture

May 4, 2014 by kostadis roussos

As a lover of fine cuisine, I am always struck by the difficulty of choosing between a tasting menu and a massive steak. The tasting menu is about sampling a little bit of a lot of things: the chef is trying to maximize the number of unique flavors per calorie. Whereas when you eat a steak, you are trying to maximize the number of steak calories.

Tasting menu is about doing a lot of little things.

Steak is about doing a lot per thing.

And that is a very crude analogy about the difference between storage and networking.

If we consider a packet a unit of work, storage systems expend a lot of resources per packet compared to network devices, and so process vastly fewer packets.

Storage system architecture must optimize the use of very slow devices. The slower the device, the more value you get from clever algorithms that spend computation per packet if they can avoid going to disk. So storage systems use complex in-memory data structures with complex algorithms that blow out data caches, because they have to. The reason flash was so disruptive to the storage industry is that the amount of computation worth doing per packet shrank as flash closed the performance gap between CPU and storage.
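The tradeoff can be sketched in a few lines: spend memory and CPU on an LRU block cache so that an IO request can skip the slow device entirely. Names and sizes below are illustrative only, not any real storage system's design:

```python
from collections import OrderedDict

# Minimal LRU block cache: CPU and memory spent here buy avoided disk IOs.
class BlockCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block number -> data, in LRU order
        self.hits = self.misses = 0

    def read(self, block_no, read_from_disk):
        if block_no in self.blocks:
            self.hits += 1
            self.blocks.move_to_end(block_no)  # mark most recently used
            return self.blocks[block_no]
        self.misses += 1
        data = read_from_disk(block_no)        # the expensive path
        self.blocks[block_no] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        return data

def read_from_disk(n):
    return f"block-{n}"

cache = BlockCache(capacity=2)
for n in [1, 2, 1, 3, 1]:
    cache.read(n, read_from_disk)
print(cache.hits, cache.misses)  # 2 3
```

Every hit is a disk IO that never happened; the slower the device behind the cache, the more CPU cycles it is worth burning to raise that hit rate.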

Network system architecture must, on the other hand, do the minimal amount of work because it is inherently slowing down the flow of electrons. A beam of light down a cable that hits a network device feels like it went from light speed to impulse speed. As a result networking systems must do heroics to minimize the amount of computation and simultaneously make the computation go as fast as possible.

Practically, what this means is that a network system can benefit hugely from custom specialized computational hardware, whereas storage cannot.

In English: ASICs make sense for networking, while no one uses anything but Intel CPUs for storage. And every company that tried to use custom computational hardware in storage failed.

This answers a long-standing question of mine: why storage companies never used custom computational hardware and networking companies did.

It also answers why server companies go after storage and not networking: the hardware that server companies build is more aligned with a storage problem than a networking problem. For example, Sun and Dell and HP went after storage in a way they never went after networking.

This also answers the mystery of Cisco. Much as Intel won the server CPU war, a single dominant player can emerge in networking given the need for custom hardware. At some point only a small number of players can afford to play the game.

And last, it answers the software-defined storage mystery. In software-defined networking, we move control out of the hardware device. Software-defined storage just says that the control and data path code can run on any piece of hardware… This is an important distinction that is not obvious.

Coolness.


Filed Under: innovation

Steve Jobs and Jeff Bezos took on the FAA and won

May 4, 2014 by kostadis roussos

As I write this post during takeoff, I am reminded of the power of compelling and amazing products.

For the longest time, the FAA had this silly rule about electronics. Until the Kindle showed up it was a minor nuisance: you learned to read your book during flights. And heck, the size of the seats made it impractical to use a laptop for anyone except business travelers.

Then the Kindle showed up, and all of a sudden the rule became a source of intense aggravation. But heck, it only affected those of us who wanted to read.

For the first time the rule became irritating.

But when the iPhone showed up the rule became unbearable.

You couldn't play a game of Words With Friends, your kid couldn't keep watching a movie, and you couldn't read a book. So we started breaking the rule all of the time.

There were moments like these you could never record.

Your son clutching his toy helicopter and airplane, eyes as big as saucers, staring out of the airplane window, all because of bad, bad science.

I don’t know if Amazon or Apple ever lobbied on our behalf, but I doubt it.

What took down the stupid rule was that Steve and Jeff created products that consumers demanded to use all of the time, making the rule unenforceable.

Great products can transform the world by creating a demand to overturn something stupid. Something to think about as we build stuff.


Filed Under: innovation

Systems Programming is not Systems Architecture

April 27, 2014 by kostadis roussos

Every so often, I get pulled into a discussion about how you identify a great systems programmer, mostly because I hang out with other systems programmers and we're evaluating a candidate for a job opening. Then the usual discussion about interview questions and projects emerges. In fact, there is a Quora question that I answered on the topic. The discussion usually devolves into the ability to understand things like the hardware/software interface, kernel internals, asynchronous behavior, etc.

Just recently, at a meeting at Juniper, it struck me that we never talk about the truly rare skill of systems architecture. And more importantly: how do you find and recognize that skill?

So what is systems architecture?

Systems architecture is the ability to understand the abstract system architecture of the problem, understand what kind of hardware options exist and then define a software architecture that is able to exploit the hardware in ways that add tremendous business value.

 

[figure: 2014-04-27_1626]

System architecture is what takes something on the left of the picture and defines something on the right.

Mouthful ain’t it?

So let’s break this up a little bit.

Abstract system architecture

When you consider a system of any kind, there exists a way to describe that system that is decoupled from any implementation yet at the same time is readily recognized by experts in the art.

Let's consider something like a file system. A trivial file system has a block virtualization layer that maps logical blocks to physical blocks; a way to organize virtual disk blocks into containers, which can in turn be organized into various interesting hierarchies; a mechanism for writing blocks to disk; and a mechanism for reading blocks from disk into memory.
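The block virtualization layer described above can be sketched in a few lines: a map from logical to physical blocks with a trivial allocate-on-first-write policy. Everything here is illustrative, not any real file system's layout:

```python
# Toy block virtualization layer: logical block numbers are what the file
# system above sees; physical block numbers are where data actually lands.
class BlockMap:
    def __init__(self):
        self.map = {}            # logical block -> physical block
        self.next_physical = 0   # trivial allocator: hand out the next slot
        self.disk = {}           # stand-in for physical storage

    def write(self, logical, data):
        if logical not in self.map:          # allocate on first write
            self.map[logical] = self.next_physical
            self.next_physical += 1
        self.disk[self.map[logical]] = data

    def read(self, logical):
        if logical not in self.map:
            raise KeyError(f"logical block {logical} was never written")
        return self.disk[self.map[logical]]

fs = BlockMap()
fs.write(100, b"hello")   # logical block 100 lands on physical block 0
fs.write(7, b"world")     # logical block 7 lands on physical block 1
print(fs.read(100))       # b'hello'
```

Even this toy makes the cost structure visible: every read and write pays a map lookup, and the map itself is the one piece of state that must never be lost.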

Immediately, anyone who is expert in the art will point out a shit-load of stuff that I glossed over. And that discussion is important, because system architecture is about agreeing on which important things are always there. And, more intriguingly, on which things that seemed important can be dropped.

But beyond just being able to articulate the abstract system architecture, you also have to have a keen insight into the relative computational complexity of the pieces. For example, how much memory and CPU does maintaining a map consume versus doing a read or a write?

And beyond the computational complexity, understanding which pieces must be very robust and which pieces can be less robust is also important. For example, individual blocks are unimportant unless they contain map information.

Understand what kind of hardware options exist

Given an abstract system architecture, the next question is how to manifest that model. The truth is that systems architecture is the ultimate wrong tool problem: the perfect abstract system cannot be implemented without huge compromises to business value, whether in performance, cost, or availability.

For example, in the case of file systems, there is the choice of processor, memory, and what kind of storage you will use. The more CPU and memory you have, the more computation you can do per IO and the less IO you need from the underlying physical sub-system. And the faster the physical storage you have, the more performance you need out of the CPU, because you have less time to do work per IO.

Understanding the tradeoffs and trends is really important, as is understanding the different kinds of options within a category.

What’s not important is understanding the exact details until you get to actually building a specific instance of your architecture.

Knowing that CPUs of type A perform 3x better than CPUs of type B, and what the projected performance curve over the next 5 years looks like (from vendor roadmaps), is crucial, especially if type A has a different architecture than type B, with different tradeoffs in how you write software.

Knowing the ratio of performance between disk and memory is important.

And knowing how all of these ratios interact with each other is also really important.
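As a sketch of that ratio arithmetic, using classic order-of-magnitude latency figures (assumed for illustration, not measured on any particular hardware):

```python
# How much memory work does one storage IO "buy" you?
dram_access_ns = 100            # one DRAM access, order of magnitude
disk_seek_ns = 10_000_000       # ~10 ms for a spinning-disk seek
flash_read_ns = 100_000         # ~100 us for a flash read

# While one disk IO is outstanding, you can do this many memory accesses:
print(disk_seek_ns // dram_access_ns)   # prints 100000

# Swap in flash and the per-IO budget shrinks by two orders of magnitude:
print(flash_read_ns // dram_access_ns)  # prints 1000
```

This is why faster storage demands more out of the CPU: the window in which the software can afford to do clever work per IO collapses when the physical layer speeds up.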

Add tremendous business value

So I am jumping ahead because this is the most peculiar statement. The thing about abstract systems architectures is that they can be viewed as an end goal in and of themselves. And there is a lot of value in pursuing that research and understanding.

In my mind, systems is really about building something in the here and now, and something that is built in the here and now needs to add value in some material way. And, for better or for worse, I use the term business value to define that material way.

Perhaps more prosaically, a better definition is that systems architectures that are interesting have to deliver better top-line performance that is sustainable, better availability, or better price/performance, where price can now include power consumption.

Define a software architecture

Most software architectures look like a collection of boxes that have arrows that point to each other.

Fundamentally, a systems software architecture is a decomposition of a software system that maps to how the hardware can be taken advantage of to deliver exceptional business value. A systems software architecture will, almost definitionally, not look like the abstract system architecture of the system, because a real-world implementation requires tradeoffs to deliver value.

This kind of decomposition, in my mind, is the essential difference between systems architecture and user application programming. Systems architecture considers how the hardware behaves and decomposes the software to take advantage of the hardware capabilities. User application programming considers how people behave and decomposes the software around that axis.

So why is this important?

Massive revenue and value opportunities exist when you are able to take an abstract system architecture and define a software architecture that leverages new hardware to get a 100x improvement along some business value axis.

NetApp, back in the day, is an example of such an opportunity. Hitz and others were able to see a huge opportunity around RAID and were able to articulate a software architecture that exploited RAID in a unique way that then delivered massive business value.

Not every systems architecture is that valuable, mind you, but some are.

And this skill of defining architectures applies whether you’re building an OS, a radically new kind of PHP compiler, or a cloud application. It is, in my mind, not a skill we spend enough time defining, examining, and interviewing for.

 


Filed Under: innovation

Failure in a packaged world

April 26, 2014 by kostadis roussos Leave a Comment

In the digital download age, failure gets deleted from a collection of web servers. Nobody bothers to record your failure; there is nothing left of your existence.

Does anyone remember the Pets.com sock puppet?

Other than digital images and entries in Wikipedia, the failure has disappeared.

It wasn’t always that way.

In 1982, a ten-year-old version of me begged and pleaded that his mom buy him a video game based on the hit movie E.T. To a proud owner of an Atari console, the expensive cartridge offered a way to relive the movie experience.

I don’t remember much of the game, other than owning it and waiting in line to get it. I think that was the high point of owning the game.

Many, many years later (approximately 20, and many eons after I tossed that Atari system and the game cartridges that went along with it), I discovered on the web that apparently the game wasn’t that successful.

Apparently I wasn’t the only person to think the game wasn’t memorable.

There was an urban legend about a cache of games buried in the desert because no one wanted to buy them. And lo and behold, the truth was out there…


It also turns out to be another lesson in never saying never: I had wondered whether technology archaeologists would come to exist in 100 years, and it turns out they are already here…


Filed Under: innovation

Never say never, lessons in technology innovation

April 25, 2014 by kostadis roussos 1 Comment

Yesterday I committed the cardinal sin of assuming a problem would not be solved.

I assumed that the state of the art of storage was:

[Images: spinning disk platters, magnets, a gramophone]

Also known as spinning magnetic rust.

And whenever you make an assumption like that, you end up being screwed within 24 hours of saying it.

A buddy of mine pointed out that there is a solution.

Apparently we can use quartz to store data for 300 million years.


Well then.

 


Filed Under: innovation
