wrong tool

You are finite. Zathras is finite. This is wrong tool.


Rethinking the Internet of Things

July 11, 2014 by kostadis roussos 3 Comments

Over the last month I've been struggling with the Internet of Things. My scale has an internet connection, my TV has an internet connection, my toe has an internet connection, and soon my watch will have an internet connection. Managing the Wi-Fi passwords and connectivity for all of them was a Pain-in-the-Ass.

I kept telling my wife that we need a better solution.

It turns out that two very interesting technology trends are going to radically remake the Internet of Things.

The first is that 4G LTE chips are really cheap. For those not in the know, building a chip that could do cellular used to be black magic, but with 4G LTE that is no longer the case. Thus "how do I connect to the internet?" becomes "how do I make a cellular call?", and that's a much simpler user experience.

The second is that Bluetooth LE is transformative. If you think about the Internet of Things, the things are actually generating small amounts of data, whether very frequently or infrequently. For devices that don't have enough power to justify a 4G LTE chip, Bluetooth LE is a really interesting alternative. Bluetooth LE, if you believe the marketing buzz, will let a device transmit for years on a single battery without a replacement. Given the range of Bluetooth LE and the existence of the ultimate Bluetooth LE receiver, your cell phone, I can totally imagine devices talking transparently to your cell phone and your cell phone talking transparently to the internet to relay the data.
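To make the small-data point concrete: a classic Bluetooth LE advertisement carries at most 31 bytes of payload, which is plenty for a scale or a sensor. Below is a sketch, in C, of what such a payload could look like; the field layout past the standard length/type/company-ID header is invented for illustration, not any real product's format.

    #include <stdint.h>

    /* One BLE advertising "AD structure": [length][type][data...].
       Type 0xFF is manufacturer-specific data, whose first two data
       bytes are a Bluetooth SIG company identifier. The sensor fields
       after that are hypothetical. */
    #pragma pack(push, 1)
    struct scale_beacon {
        uint8_t  len;         /* bytes after this one: 1 + 5 = 6    */
        uint8_t  type;        /* 0xFF = manufacturer-specific data  */
        uint16_t company_id;  /* assigned by the Bluetooth SIG      */
        uint16_t weight;      /* in 100 g units: the actual payload */
        uint8_t  battery_pct; /* housekeeping                       */
    };
    #pragma pack(pop)

Seven bytes end to end; the phone that overhears it does the heavy lifting of shipping it to the internet.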

The core objection I had to the Internet of Things seems to be addressable with existing technology that is going to market right now.

Cool.

 


Filed Under: innovation

Synchronicity

July 10, 2014 by kostadis roussos Leave a Comment

For many different reasons, I finally got around to reading Beautiful Code.

Because Bryan Cantrill happened to be a classmate of mine, I started with his chapter.

The synchronicity is that in 1999 he was working on a problem related to priority inversion on Solaris. Intriguingly, at the same time, a friend of mine at SGI was working on a similar problem.

In effect, both companies had decided, independently, to build real-time kernels, and had independently run into similar, although not identical, problems. At SGI we had to figure out how to make reader-writer locks deal with priority inversion without crippling performance.
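For readers who haven't been bitten: priority inversion happens when a low-priority thread holds a lock a high-priority thread needs, and a medium-priority thread preempts the holder, so the most important work waits on the least. The classic mitigation is priority inheritance; here is a minimal sketch of it using the POSIX API. This is not the Solaris or SGI code, and the reader-writer case SGI faced is harder precisely because many readers can hold the lock at once, so there is no single thread to boost.

    #include <pthread.h>

    pthread_mutex_t lock;

    /* Create a mutex with the priority-inheritance protocol: when a
       high-priority thread blocks on it, the low-priority holder is
       temporarily boosted to the waiter's priority, so a
       medium-priority thread can no longer starve it. */
    void init_pi_mutex(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutexattr_init(&attr);
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);
    }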

What made this fascinating is that I tend to agree with Bryan when he talks about software being like math.

 


 

Although I dread using that analogy, because real mathematicians would raise an eyebrow…

The more important point is that once you decide to tackle an area in software, the solutions tend to have the same attributes and the challenges tend to be the same. What makes it software engineering is that the actual solutions differ greatly, because they are embodied in specific implementations.

 


Filed Under: innovation

Lessons on Mobile Gaming and Cross-Platform

July 7, 2014 by kostadis roussos 3 Comments

We have a Skype chat where a whole bunch of the ex-Zynga senior technologists (we called them CTOs) get to chat about stuff we learned.

One thing that came up was our attempt to build web-mobile-backend cross-platform technology. As a key driver of that effort, I learned that it was misguided.

Because I like the immediacy of the dialog, I am including the Skype chat:

[] kostadis roussos: if I was going to enumerate my biggest professional mistake it would be to completely misunderstand the mobile gaming market need for specialized gaming engines targeted at specific games.
[] kostadis roussos: The reality was that I didn’t understand the importance of having a hyper-optimized game for a specific platform and that it was more important to have the BEST game implementation than leverage.
[] kostadis roussos: That leverage was basically unimportant – as a gaming company – … What was important was delivering the best possible user experience.
[] XXX: it’s a fundamentally hard problem to solve and i think what I took away is being polyglot is preferable
[] XXX: it seems like you’re really just talking about the failures of flash/html5 on mobile, not so much a “specialized gaming engine” since cocos2d / unreal / unity are all cross platform
[] kostadis roussos: @XXX – intriguingly in 2013 when we did a whole bunch of analysis 8/10 games had their own custom engines.
[] kostadis roussos: Maybe the data has shifted – and I'll accept that notion – but you really need to have a highly customizable engine where you are not giving up any performance or user experience.
[] kostadis roussos: My point is that I completely misunderstood the centrality of that point. I didn't get that we needed to think of a game as a CMS with a very optimized and specialized "rendering" system, and that we needed to have the best one for any game genre or we would lose.
[] kostadis roussos: And so I pursued html5 as a solution, flash as a solution, and then the PlayScript language we built – all of these were attempts to solve the wrong problem. We should have been building the best UX system for I/E games in native languages with some C++ for the business logic bits that could be shared across multiple platforms.
[] kostadis roussos: Oh well. Lesson learned.
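To make the "C++ for the business logic bits" point from the chat concrete, here is a sketch of the shape I mean; the names and the mechanic are invented, not Zynga code. The idea is a small, platform-neutral core that an Objective-C shell on iOS, a JNI shell on Android, and a JavaScript bridge on the web can all call, while every platform keeps its own fully native, hyper-optimized UX.

    #include <stdint.h>

    /* game_logic.h: pure rules, no rendering. Deterministic, trivially
       portable, and small enough that re-testing it per platform is
       cheap. Each platform wraps this in its own native UI code. */
    static inline uint32_t energy_regenerated(uint32_t elapsed_seconds,
                                              uint32_t seconds_per_point,
                                              uint32_t cap)
    {
        uint32_t points = elapsed_seconds / seconds_per_point;
        return points > cap ? cap : points;
    }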

My only personal consolation was that I gave the Prezi guys some good advice based on what I learned. They feel good about the outcome.


Filed Under: innovation, Zynga

It’s not physics but reasoned arguments

June 21, 2014 by kostadis roussos Leave a Comment

In a recent meeting, a hardware engineer turned and expressed his frustration at the state of a software stack. The frustration centered on the unbelievable complexity the software had accumulated over the years. There was a feeling that somehow, somewhere the software team had lost its way.

An attempt was made to draw an analogy to hardware. The claim was that hardware that tries to fight physics inherently fails to work. Physical laws constrain design, and designs that fight those laws end up failing. Therefore, the argument went, there were certain physical laws our software had violated, and that was why it had become so complex.

In particular, an appeal was made to modularity and strong API boundaries as the key to great software. Not to have those things meant that we were breaking some law of physics, or its nearest software equivalent.

And it got me thinking. A lot. It was an interesting perspective. In all my discussions about software with software guys, the idea that physics would have anything to do with anything had never come up.

Over the last month, I came to the conclusion that the hardware engineer was correct about the need to follow some principles, but those principles were not physical laws.

All you have to do is read this paper: Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs by John Backus to realize that software does not view physics or the underlying hardware as its master. In fact, Backus calls on software practitioners to create software that will force hardware people to create new kinds of hardware:

There are numerous indications that the applicative style of programming can become more powerful than the von Neumann style… when these models and their applicative languages have proved their superiority over conventional languages will we have the economic basis to develop the new kind of computer that can best implement them. Only then, perhaps, will we be able to fully utilize large-scale integrated circuits in a computer design not limited by the von Neumann bottleneck

Physics is the thing that gets in the way of our software, the thing that enables us to execute our software in the real world, but not our master.

Instead, I would argue that the laws software ignores at its peril are the laws of reasoned argument.

What do I mean? Software has the wonderful property that its correctness is formally unknowable. We can't prove, in general, that it is correct. And if we can't know, what do we do?

All we can do is reason about the software. And to reason about software is to try and understand an argument.

If that is the case, then a well-crafted argument is:

  1. Clear
  2. Concise
  3. Structured
  4. Has a general structure based on specific facts
  5. Complete
  6. Novel
  7. Builds on previous arguments

All attributes of good software are actually the attributes of good reasoning.

As an argument acquires complexity it becomes harder to follow, harder to re-use, harder to leverage, and harder to judge for correctness.

And that fact, perhaps, explains why software is so hard. Good software reflects the quality of our ability to reason and articulate our reasoning.

Things like modularity and APIs are really techniques to construct good arguments.

Our software mess wasn't because we had failed to follow some law of physics; it was because we had chosen to become sloppy in our thinking and our reasoning.

What does this mean, practically?

As we struggle with how to make software better, what we don't talk about is the need to improve our engineers' ability to reason. Our schools, focused on mechanisms and techniques, spend very little time specifically training people how to argue.

Perhaps there is something we need to learn from the law. Unclear.


Filed Under: innovation

How to program in C

June 16, 2014 by kostadis roussos 14 Comments

For lots of reasons this past week I decided to look into books on the C programming language.

There are very few books that have been published in the last 10 years on C.

You’d think that given the amount of code that is still being written, the amount of code that must be supported, new books would exist.

Heck, you would expect web pages to exist.

Instead, crickets.

Or I am just doing my web searches wrong.

The challenge with C, if you’re an author of a book, is that the language is pretty simple. The libraries are also pretty simple. The complexity of the language is that, unlike almost every other language out there, C does very little to obscure or hide the underlying hardware. To program in C is to program, for better or worse, directly on the underlying hardware.

Hardware doesn’t have garbage collection, memory hierarchies exist, CPU’s have error handlers and has registers that need to be carefully programmed. Hardware has errata that makes your code break in weird ways.

There is a temptation to write a book about the C language that quickly turns into an apologia for the limitations of the language definition instead of an exploration of how and why it’s used and the value it brings.

Most texts and books for other programming languages advocate a style of programming that creates a nerfed environment hiding the complexity of hardware. The theory of those authors and languages is that the physical reality of hardware gets in the way of creating magical software constrained only by the Turing tar-pit. Heck, Apple just released such a language, called Swift, to get away from Objective-C because, in many ways, Objective-C didn't abstract the hardware enough.

There is a book crying out to be written about how to program to the hardware-software interface. A book that demystifies a lot of what I have learned through painful, bloody and miserable training.

If someone has a good book, just drop me a comment.

 

 


Filed Under: Uncategorized

Caribbean Cruise Lines and the Turing Test

June 5, 2014 by kostadis roussos Leave a Comment

When the Turing Test was first posited, we viewed it as a terrifying and beautiful milestone. Today I see that it was in fact an evil milestone: when computers are able to pass the Turing Test, they will be used to sell empty seats on Caribbean cruise lines.

I got a robocall where the caller was a robot that responded to my questions about cruises before I figured out it was a machine.


 

The robocall didn't immediately start spewing about cruises; instead it waited for me to respond. And its responses were contextual… Only when I recognized the stupid robot voice did I realize what was going on.

The Turing Test is an attempt to articulate when a computer is intelligent without trying to understand if a computer is intelligent. The goal was to sidestep the philosophical debates that made the engineers trying to build stuff cry.

The hope was that an intelligent computer could make better decisions than we could, because in the best case it could decide faster with more information; in the worst case, we would have infinite free labor, freeing man to have more time for himself.

Except we, computer scientists, lacked vision.

It is like the folks in computational photography who believed that unless you could make pictures more realistic, there was no point to the technology.

[Instagram logo]

 

I mean, who wants a picture that has been made to look worse…

AI missed its true calling. The Turing Test is really about salespeople calling you 24×7 to irritate you with offers you don't want.

 

 


Filed Under: innovation

How Firewalls Killed Zynga or Why the Mobile Internet Took Off in 2012

May 25, 2014 by kostadis roussos 7 Comments

In life, when confronted with a mystery I always use this principle:

Occam’s conspiracy razor: Never ascribe to malice when stupidity will suffice. 

In 2012, Zynga experienced a rapid decline in numbers while, at the same time, mobile gaming and mobile traffic took off. That mobile would take over wasn't a surprise. We all knew it was going to happen; what was surprising was how fast web-based browsing dropped.

Mobile was going to be big, but why did it come at the expense of the web? Why did the web die out?

Occam’s conspiracy razor was that people just wanted to move to mobile, and that 2012 was the year it happened. There was no conspiracy, no external event, just a mass migratory movement.

Early 2012

In early 2012, as I looked over the landscape of the mobile internet, my reaction was that this was going to be a platform that added more people, not one that took away from the web. Adults would use both the mobile device and the desktop for all of their activities. Most people, sitting in front of computers during working hours, would play their games, do their Facebooking, and do their shopping on their computers while working. Later in the day, when they went home, they would use their mobile devices on the couch while they hung out with their families and friends.

That. Did. Not. Happen.

At the end of 2012, we could confidently call the death of the desktop web. Desktop DAU for Zynga cratered, and Facebook web DAU had flatlined.

What happened?

What happened was that while Zynga and Facebook were creating the ultimate form of escapist fun for the masses, corporate America noticed.

And what they noticed was that web surfing, which had been an annoying tax on employee productivity, was becoming a massive time sink. Employees were playing their Zynga games and sharing on Facebook with family and friends instead of doing their jobs, and incidentally consuming vast amounts of bandwidth. Then Netflix entered the picture, and the lost productivity and bandwidth were getting serious.

Before Facebook, social, non-professional interaction was hidden in corporate and private email or on the phone. The water cooler, the smoke break, the lunch room, the break room: these were where we hid minutes of wasted time every day.

Before Zynga, it was really hard to play a game on a corporate laptop. The security teams locked computers down so tight that nothing could be installed. And to be honest, you're not going to take a five-minute break to play some Call of Duty. Most employees played Minesweeper because that was the only game they had.

So lost productivity was visible but unblockable.

Enter Zynga. Farmville at its peak had 30 million DAU. That's an insane number for a game. Those 30 million people were not all playing at home; peak DAU hours were during the working day. Corporate America noticed.

At the same time as Facebook and Zynga were taking over the world, ng-fws (next-generation firewalls) came into existence. Their claim to fame was that they could identify applications and then apply security policies at the application layer.

Apparently, as I recently discovered, for the last several years the basic pitch of an ng-fw begins with:

How do you stop your employees from playing Farmville?

Obviously it's more nuanced… Not really. I mean, sales guys as recently as last week positioned the ng-fw as the way to block people from playing Farmville. The point is that corporate America figured out how to control what applications their employees were using, even on the web.

Managers wanted to stop their employees from goofing off, and since employees were using communication channels that the managers could choke, management did.

What about SSL? Well, it turns out the firewall vendors figured that out as well. The firewalls would terminate the SSL connection to the external site and then re-sign the certificate. The IT teams would helpfully suppress the warnings associated with the re-signed certificate, leaving people oblivious to the fact that their traffic was being man-in-the-middled.
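As an aside, the interception was always easy to see if you knew to look. Here is a sketch, my own example using standard OpenSSL (1.1+) calls rather than anything from a firewall vendor, that connects to a site and prints the issuer of the certificate actually presented; behind a re-signing firewall the issuer is the corporate CA, not the site's public CA.

    #include <stdio.h>
    #include <openssl/ssl.h>
    #include <openssl/x509.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "www.facebook.com:443");

        if (BIO_do_connect(bio) <= 0) {  /* TCP connect + TLS handshake */
            fprintf(stderr, "connection failed\n");
            return 1;
        }

        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        X509 *cert = SSL_get_peer_certificate(ssl);
        if (cert != NULL) {
            char issuer[256];
            /* Who signed what we received? A corporate CA here means
               the connection is being man-in-the-middled. */
            X509_NAME_oneline(X509_get_issuer_name(cert), issuer,
                              sizeof issuer);
            printf("certificate issuer: %s\n", issuer);
            X509_free(cert);
        }

        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }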

Net effect, throughout corporate America, firewalls were quietly and silently blocking access to consumer web-sites that employers felt were not really related to work.

So what does this have to do with Zynga?

At the same time, at Zynga, I was observing a really weird, unexplained phenomenon. I was seeing evidence of people trying to start our games and failing. Stuck in the cauldron of Zynga, we had no insight into what was going on.

We assumed, applying Occam’s razor, that people were starting to load the game and then quitting. The sheer scale of attempts was mystifying, but we assumed that people had their reasons.

I mean, how else could games be blocked?

Last week I went to Hawaii and experienced an ng-fw firsthand.

1. Sometimes I couldn’t load the game.

2. Sometimes I could load the game but then the connection would be dropped.

And I realized that our customers had been blocked from accessing our games.

And then it clicked. All of those mysterious reports, all of those users who couldn’t get to our games, all of those bizarre late night sessions trying to understand what the hell was going on, and we missed the most obvious thing of all:

Employers didn’t think their employees should be playing games at work and were doing something about it.

So why was this bad for Zynga?

Let me caveat the next bit with the following: Mark Pincus never accepted the idea that Zynga was a victim. We didn't fail because other people did things to us; we failed because we didn't execute and didn't deliver. And I agree. If we failed, it's because we failed, not because other people screwed us.

Let me also caveat, that this is pure speculation. I don’t have any data to back any of this up. At the time I didn’t know what to collect, and now I don’t have access. 

DAU and Engagement

Spend enough time on the Facebook platform and you learn that the only thing that matters is engagement. Facebook wants people to be engaged with its product. They'll flood you with users to see if they stick, but if you don't keep those users, be afraid, because eventually they'll point them somewhere else.

Firewalls were breaking Zynga’s engagement in two ways.

The first was that users were unable to get to the game. With games built on a plant-harvest cycle, if you can't make it into the game to harvest, you're likely to quickly give up on the game itself.

The second was that users were conditioned to stop clicking on our feeds and virals. If clicking on a feed or a viral led nowhere because of firewalls, people stopped clicking.

As a result Facebook was seeing a decline in engagement in Zynga games. And because they care about their users, first and foremost, they started to point their users away from our games. And good for them.

Unfortunately this fed a vicious cycle of decline. The more our users left, the more we tried to reach them through virals; the more Facebook users got annoyed with those virals, the more Facebook throttled them. And because Facebook was always looking for other content it could get people engaged with, it pointed the firehose of users elsewhere.

Enter mobile

If there had been no mobile platform, I posit Facebook would have seen its users flatline just as Zynga did. Instead, people discovered the one device with a network their corporate bosses could not control: their mobile phone.

With their access to Facebook blocked, employees discovered what teenagers have known since forever: mobile phones are the only way to talk to your friends without anyone getting in the way.

Much like teenagers who used SMS as a cheap way to talk forever with their friends, adults reached for the mobile device as a cheap way to stay connected once their internet access got blocked.

And mobile Facebook traffic skyrocketed.

And once everyone got used to using their mobile device instead of their corporate laptop to connect with the Internet, the rest was history.

The transition was inevitable; firewalls simply forced it to happen ridiculously fast, in 2012.

Some final thoughts

When I look back at my own mistakes, what I missed about mobile was not the form factor but the access to the users.

At the end of the day, consumer websites live and die by their ability to reach end-users. And unfortunately, the last mile is controlled by corporate entities that may have a dim view of people goofing off on the job.

Mobile is the future because it represents the only reliable way for consumer businesses, net neutrality aside, to reach their customers without some intermediary blocking them.

What I missed at Zynga wasn't mobile; it was the fact that only on the mobile platform could we be guaranteed access to our users.



Filed Under: Facebook, innovation, Security, Selling, Zynga

Voodoo Debugging in the Age of the Cloud

May 21, 2014 by kostadis roussos Leave a Comment

Software engineers, for all of their deeply rational view of the universe, sometimes approach debugging like soothsayers inspecting entrails, trying to divine what the hell is going on.

 

The scene played out like this:

Bug report: XyZZy is happening

Engineer stares at bug report, stares at software, stares at test case that doesn’t reproduce it.

Engineer then tries things that make sense.

Bug doesn't go away.

Then Engineer tries random acts of coding to see if the bug goes away.

And his peers start saying: Wow, yeah, I think I saw something like that. Did you try toggling the frommle bit?

And the engineer says: But this has nothing to do with the frommle bit, this is tied to the pluxion submodule.

And another peer says: Try adjusting the memory size of the core structure.

And the engineer says: But my code never touches that core structure!

And another peer says: We've had success with this kind of problem that we don't understand when we altered the return value of the function.

And the engineer says: But this makes no sense.

And they all respond like a Greek chorus: But it sometimes works.

At this point in time the engineer is hoping and praying that just moving the code around will cause the bug to go away.

My buddy and I used to call that last stage of debugging “Voodoo debugging”.


You don’t understand the problem, you don’t understand why the bug is happening, you have a deadline to meet, so you try random things hoping that the bug will just go away.

And the sad truth is that the simple act of moving things around may cause the bug report to go away. The bug may reappear later in some other poor bastard's bug queue. And the Voodoo debugging will continue as people try this and try that.

In the pre-cloud era of computing, when software engineers believed it was their divine right to understand how it all fit together, this was viewed as a moral failing. Back in the day we could understand how the entire system worked, from the silicon up, and we could use that knowledge to thoroughly debug and understand the problem.

Those who lacked the moral fiber and intellectual rigor to do real software engineering practiced Voodoo debugging. The rest of us understood how the entire system fit together and resolved the issue down to its root cause.

Then I moved to Zynga. And at Zynga it became apparent that Voodoo debugging wasn't just a product of a lack of moral fiber or intellectual rigor; it was a rational response to the environment.

I talk about this a lot in an answer on Quora. The essence of the argument is that systems are in such continuous flux that understanding a system in its entirety is impossible, and worse, by the time you understand it, the system has changed.

Bugs come and bugs go. And bugs have an element of Voodoo magic about them. Trying to understand every bug and resolve it is impossible. We are now in the era of trying to resolve issues without fixing them. We have become doctors who offer palliatives rather than cures, in the hope that the problem will just go away.

And so I embrace my inner Voodoo doctor. I will try the magic chants, I will pray that the next update of the API I am using fixes the problem, and I will internalize the truth that finding the root cause is an illusion my earlier self entertained.

 

 

 


Filed Under: innovation

Joe Thornton and Smart Systems

May 11, 2014 by kostadis roussos Leave a Comment

On Dec 1st, 2005, the Boston Bruins made a catastrophically piss-poor decision to trade Joe Thornton to the San Jose Sharks. As a result of that trade, San Jose became a perennial contender. Boston did eventually become a great, great team, but trading Thornton could only be described as taking the agonizingly slow route.

The rationale for the trade was a belief in the gutless-Thornton mystique, a story created from the nonsense of his first foray into the playoffs, when, with a broken rib, he was only able to get one measly assist in a seven-game series loss to the Montreal Canadiens.

Over the years I have seen Joe Thornton carry mediocre San Jose teams to the playoffs only to be continuously thwarted by Doug Wilson’s inability to get a decent third line…

Or so I think …

My point in all of this is that Joe Thornton, to anyone who knows hockey, is most definitely not a Bruins player…

Unfortunately Facebook’s algorithms are a little bit less aware of the NHL and hockey… and produced this trending result:

Facebook trending:

  • Joe Thornton: Bruins' Thornton fined $2,800 for squirting water

And herein lies the danger of computers and their lack of semantic awareness. They create such delightful moments of absurdity. The algorithms confused Joe Thornton with Scott Thornton.

Without any knowledge of the internals of the implementation, I could suppose that Facebook used the fact that I post stuff about Joe Thornton, including posts about how he once played for the Bruins, to surface a note about Scott.

This is the inherent danger in an era of big data. We have all of this data, we run clever algorithms over it to create interesting and useful results, and we immediately share those results with the world. And most of the time the results are good.

The problem arises when the result is not good.

And it's not that computers are less reliable than human beings; humans are just as prone to absurd errors. The problem with computers is how we humans interpret the results, or choose to share the results without interpreting them, because they are correct most of the time.

The key difference from the past is that computers used to have access to less data, so our ability to find relationships where none existed was limited. We knew the data was too small to trust any result that did not fit our intuition.

But now, with so much data, who is to say that the relationship doesn’t exist?

Perhaps our clever algorithms did find relationships that were only possible because of the amount of data we had collected.

And so we need to be careful and humble as we look at the results. Sometimes we will find things in the data that we didn’t know existed, and sometimes we might be looking at the night sky and seeing a horse in the stars.

Because, I can assure you that Joe Thornton is both not a Bruins player and not playing hockey right now.

And as I review my post, it's tempting to ask myself: maybe the data is telling me a truth, that Joe Thornton is not as great in the playoffs as I think he is. Maybe I need to listen to the fact that he was -6 and scored only 3 points this year, and that that had something to do with the performance of the Sharks. Maybe. Or not.


Filed Under: innovation

Big steak vs Tasting menus and the difference between storage and network system architecture

May 4, 2014 by kostadis roussos Leave a Comment

As a lover of fine cuisine, I am always struck by the difficulty of choosing between a tasting menu and a massive steak. The tasting menu is about sampling a little bit of a lot of things; the chef is trying to maximize the number of unique flavors per calorie. When you eat a steak, you are trying to maximize the number of steak calories.

A tasting menu is about doing a lot of little things.

Steak is about doing a lot per thing.

And that is a very crude analogy for the difference between storage and networking.

If we consider a packet a unit of work, storage systems expend a lot of resources per packet as compared to network devices and so process vastly fewer packets.

Storage system architecture must optimize the use of very slow devices. The slower the device, the more value you get from clever algorithms on a per-packet basis if they can avoid going to disk. So storage uses complex in-memory data structures and complex algorithms that blow out data caches, because storage systems have to. The reason flash was so disruptive to the storage industry is that the justifiable computation per packet shrank as flash closed the CPU-versus-storage performance gap.

Network system architecture, on the other hand, must do the minimal amount of work, because a network device inherently slows down the flow of electrons. A beam of light traveling down a cable that hits a network device goes from light speed to impulse speed. As a result, networking systems must do heroics to minimize the amount of computation and simultaneously make that computation go as fast as possible.
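Some back-of-the-envelope arithmetic, my numbers rather than anything from a datasheet, makes the gap concrete: at 10 Gb/s line rate, a 3 GHz core gets a couple hundred cycles per minimum-size packet, while a 10,000-IOPS disk array leaves the same core hundreds of thousands of cycles per I/O to spend on clever algorithms.

    #include <stdio.h>

    int main(void)
    {
        double hz   = 3e9;                /* one 3 GHz core             */
        double pps  = 10e9 / (64 * 8.0);  /* 64-byte packets at 10 Gb/s,
                                             ignoring framing overhead:
                                             ~19.5M packets/sec         */
        double iops = 10000.0;            /* a fast 2014-era disk array */

        printf("cycles per packet:  %.0f\n", hz / pps);   /* ~154     */
        printf("cycles per disk IO: %.0f\n", hz / iops);  /* ~300,000 */
        return 0;
    }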

Practically, what this means is that a network system can benefit hugely from custom, specialized computational hardware, whereas storage cannot.

In English: ASICs make sense for networking, while no one uses anything but Intel CPUs for storage. And every company that tried to use custom computational hardware in storage failed.

This answers a long-standing question I had: why storage companies never used custom computational hardware while networking companies did.

It also answers why server companies go after storage and not networking: the hardware that server companies build is more aligned with a storage problem than a networking problem. For example, Sun and Dell and HP went after storage in a way they never went after networking.

This also answers the mystery of Cisco. Much as Intel won the server CPU war, a single dominant player can emerge in networking, given the need for custom hardware; at some point only a small number of players can afford to play the game.

And last, it answers the software-defined storage mystery. In software-defined networking we move control out of the hardware device. Software-defined storage just says I can run control and data path code on any piece of hardware… an important distinction that is not obvious.

Coolness.


Filed Under: innovation
