wrong tool

You are finite. Zathras is finite. This is wrong tool.


Open Facebook API or what to do about Facebook

December 28, 2019 by kostadis roussos Leave a Comment

When I left Zynga in 2013, I was convinced that Facebook was a malevolent entity run by leaders who could not be trusted. But I was also bitter about a $6 stock price and my life choices.

Fast-forward to 2019, and it turns out that what I thought was just sour grapes actually undersold the net harm Facebook has created.

An option that isn't considered very seriously is a simple proposal: don't break up Facebook, but regulate access to and control of the friend graph, along with the ability to use the friend graph to publish information.

In 2012, when Facebook and Zynga stood off, the heart of the disagreement was ownership of the friend graph. Facebook believed it owned the friend graph and, by extension, owned how it could be used. We disagreed. In the end, we caved. I know this because I worked on the software systems necessary to create a parallel friend graph of people who were friends with other people who played Zynga games.

Facebook would love for us to spend time talking about breaking things up, instead of talking about the one thing that matters: a regulated open API and regulated data portability.

Consider the messenger space. Because the friend graph is in my personal address book, it’s trivial to talk to several dozen different friends. Because the content is on my phone, typically pictures or documents, I can share anything with anyone.

Consider how many more messenger apps there are, versus how many social networks there are.

But let's look to the past. During the failed MSFT antitrust trial, a peculiar part of the settlement said that MSFT could no longer have private APIs and had to communicate changes in a very specific, public way.

This ruling enabled NetApp, which had built a reverse-engineered CIFS server, to survive and thrive. Because MSFT was losing the CIFS business, it also pushed MSFT to look for alternatives to CIFS, like SharePoint, for document sharing and collaboration.

But over the long term, it enabled Box, Google Drive, and other file-sharing products to emerge. With the guarantee that a single man couldn't break an API, a healthy and vibrant ecosystem in data storage emerged.

If we had an open social graph, an open API, and data portability, then I suspect that over time new social networks would emerge, each probably catering to different kinds of people.
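To make the idea concrete, here is a minimal sketch of what portable friend-graph data could look like. This is purely illustrative: the format, field names, and identifiers are assumptions invented for this post, not any real Facebook API or existing standard.

```python
# Hypothetical, illustrative sketch of a portable friend-graph export.
# None of these fields correspond to a real Facebook API; they only show
# what "an open social graph plus data portability" could mean in practice.
import json
from dataclasses import dataclass, asdict, field
from typing import List

@dataclass
class Friend:
    portable_id: str        # assumed provider-independent identifier
    display_name: str
    source_network: str     # where the friendship edge was created

@dataclass
class FriendGraphExport:
    owner_portable_id: str
    friends: List[Friend] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# A user exports their graph from one network...
export = FriendGraphExport(
    owner_portable_id="user:alice",
    friends=[
        Friend("user:bob", "Bob", "facebook"),
        Friend("user:carol", "Carol", "facebook"),
    ],
)

# ...and could import it into a competing network without asking the
# first network's permission.
print(export.to_json())
```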

In many ways, Facebook does this today with Facebook Groups. For example, I happen to have joined two Facebook groups, one dedicated to old-school RPGs and another to 5E. The two groups hate each other. But because my social graph is portable within Facebook, I can communicate with both groups.

Or we can go back even further, to Facebook's origins. When Mr. Zuckerberg opened up the API, he promised it was going to be open and portable. He lied, of course, but not before Mark Pincus and Zynga figured out how to exploit the graph to grow Facebook's business. Once Mr. Zuckerberg figured out that owning the graph, and how you communicate with it, was very valuable, he squashed us like a bug and destroyed the Facebook app ecosystem.

Which brings me to regulation: we can't trust Mr. Zuckerberg, just as we couldn't trust Mr. Gates. And breakups don't always work. Look at AT&T: 40 years after the breakup, they control everything again.


Filed Under: Facebook, Net Neutrality, Software, Storage, Zynga

Is the great cloud shell game over?

March 3, 2019 by kostadis roussos 3 Comments

With the success of VMC on AWS, it’s time for us to admit that the Cloud Native programming model as the dominant and only programming model for the cloud is dead.

The Cloud Native programming model was a godsend for IT and software engineers. The move from CAPEX to OPEX forced all of the pre-existing software that ran on premises to be completely re-written for the cloud.

Jobs, careers, and consulting firms exploded as everybody tried to go on this cloud journey.

It was like the great Y2K rewrite, which was followed by the C++ rewrite, which was in turn followed by the great Java rewrite …

This rewrite was forced because on-prem software assumed infrastructure that did not exist in the cloud, and so it could not work there.

On premises, you have very reliable and robust networks, storage that offers five-nines (99.999%) reliability, and a virtualization infrastructure that automatically restarts virtual machines.

Furthermore, on-prem you had the opportunity to right-size your VM to your workload instead of playing whack-a-mole with the bin-packing strategy known as “which cloud instance do I buy today?”

The problem with on-prem was that you had to buy the hardware, and worse you had to learn how to operate the hardware.

The cloud operating environment, where networks are unreliable, storage is unreliable, and applications must be HA-aware and integrate with HA-aware systems, required a full rewrite of perfectly fine working software.
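To make "HA-aware" concrete, here is a minimal sketch, under assumed names, of the kind of defensive retry logic that cloud-native services end up wrapping around nearly every remote call; `fetch_record` and `TransientError` are invented placeholders, not any particular framework's API.

```python
# Illustrative sketch only: the retry-with-backoff wrapper that cloud-native
# code carries because networks and storage are assumed to be flaky.
# `fetch_record` and `TransientError` are made-up placeholders.
import random
import time

class TransientError(Exception):
    """Stand-in for a timeout, 5xx, or dropped-connection error."""

def call_with_retries(fn, attempts=5, base_delay=0.1):
    for attempt in range(attempts):
        try:
            return fn()
        except TransientError:
            if attempt == attempts - 1:
                raise
            # Exponential backoff with jitter so retries don't stampede.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))

_calls = {"count": 0}

def fetch_record():
    # Simulated flaky dependency: fails twice, then succeeds.
    _calls["count"] += 1
    if _calls["count"] < 3:
        raise TransientError("simulated infrastructure hiccup")
    return {"id": 42, "value": "ok"}

print(call_with_retries(fetch_record))   # retries transparently, prints the record
```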

What motivated this mass migration and mass rewrite?

The motivation was that new software could legitimately be written faster on the EC2 PaaS. Furthermore, companies didn’t want to own the hardware and wanted to rent their infrastructure.

These two factors pushed companies to look at Cloud Native not as a way to augment their existing software assets, but as a once-in-a-lifetime opportunity to rewrite them.

But it turns out that is hard. And it also turns out that the pre-existing on-premises operating model is kind of valuable. Instead of every application having to figure out how to deal with flaky infrastructure, it's just simpler to have more robust infrastructure.

And now that the cost and agility advantages of the cloud have been partly neutralized, what I hope we might see is a collective pause in the software industry as we ask ourselves not whether it's on-prem or cloud-native, but what the appropriate programming model is for the task at hand.


Filed Under: Hardware, innovation, Software

The brakes have brains

February 13, 2017 by kostadis roussos Leave a Comment

A fascinating article about Bosch (https://www.technologyreview.com/s/601502/boschs-survival-plan/).

A couple of things that popped out:

  1. Factories are turning into computers. The interconnections between machines, originally a human task, are now a machine task. In 20 years, a human on the shop floor may be as ridiculous as humans swapping out transistors in an x86 processor.
  2. Data-driven optimization is getting faster. A core fallacy of data-driven product design is that it can drive new products; what analytics can do is make existing products more efficient. Pre-existing wireless networks will let devices communicate with home base very efficiently, and coupled with factory floors that can be optimized faster, this has tremendous implications for the product life-cycle.
  3. Humans who rely on brawn or physical stamina are losing value fast.
  4. There is an interesting singularity across the entire manufacturing pipeline when 3D printing, data-driven design, and fully automated factories intersect in a meaningful way. Factories will be able to retool instantaneously to meet instantaneous demand and insight.

The world of yesterday is going away so fast that the only question is whether we will survive to get there.


Filed Under: Hardware, Software

The archer and the gun and the misunderstood impact of automation

December 29, 2016 by kostadis roussos 2 Comments

Last night I went to a great burger place in Arnold called The Giant Burger. As I sat there waiting for my burger to arrive, I had a chance to reflect on the impact of automation.

The Giant Burger is not a fast place. It's a place for great food, not a place for getting great food fast. The reason is that one of the employees carefully assembles each burger to order. And because orders are big, and because she is one person, orders come out at a rate of about six per hour.

And as I watched her and thought about machines, I wondered: what do machines actually do?

What machines do isn’t replace human beings. What they do is make less skilled workers more skilled.

Consider the archer in the Middle Ages. Being an archer required a lot of skill and practice: you had to train from a young age and continuously hone your craft. In some sense, you could argue that archers were the artisans of war.

And then the gun showed up. It wasn't more reliable or more efficient than the longbow, but you could find 50 people, hand them 50 muskets, and do almost as much damage as the archers.

In short, the gun made large armies of archers possible by reducing the skill requirement.

And that happens over and over and over again.

Look at the modern military drone. I can’t fly an F16 because I am too old, too tall and too fat. I could fly a drone. And there are more middle-aged fat guys than there are highly trained fighter pilots.

And so what happened?

We have drones all over the world killing random terrorists because we can have armies of fat people sitting in rooms flying a robot.

We have more people killed from the air than at any time since the Gulf War, and not a single pilot has made the kill.

Or look at the DaVinci system for surgery. Until now, surgery has been about skill; surgeons were more athletes than scientists. With DaVinci, the skill necessary to do surgery will decline over time.

What automation does, what machines do, is reduce the value of specialized skills and democratize them. In the process, the value of human labor declines, because the number of people who can do the task increases, which pushes salaries down.

And now software is making it worse. In the past, upgrades required new physical systems; now, with software, we can upgrade existing systems in place. And because of how electronics improve, we can double the intelligence of systems roughly every 18 months.

And where it gets interesting is that before software, mechanical systems had to be carefully engineered. For example, a mechanical lever has far less tolerance for error than a computerized control system that can make micro-adjustments very quickly.

In short, we can innovate faster and more cheaply than ever before in creating machines that let anybody do anything.

Automation isn't about replacing people; it's about eliminating the need for skill. With that, we remove the value of training, and with that, we replace the highly trained archer with conscripts.

Which begets the obvious question:

So what?

Given that the value of skill is declining faster and faster, the value of most human labor is also decreasing, and therefore the per-unit price of paying someone to do a job falls below what people would accept.

And so when we say "automation is killing jobs," what we are saying is that automation is causing the price we are willing to pay for humans to do jobs to decrease.

And then we get to the policy prescriptions.

1. Some kind of universal income

One approach is to realize that there is a net surplus labor force at the current labor price, a price kept artificially high by the minimum wage, Medicare, food stamps, etc.; to recognize that this group of people will have to die off or leave the country before the surplus is eliminated; and, in the meantime, to continue extending those benefits, including something like a universal income.

The problem is that that group of unemployable people is going to expand over time.

And the other problem is that an ever-shrinking set of people will subsidize the lives of those whose skills have no value at the current price.

2. Make human labor competitive by retraining

This approach recognizes that it takes some time to build computer systems that can replace all skills and that the computer systems themselves need human operators. And so we continuously retrain people.

The challenge is that during retraining people are not employed, and post-retraining the value of their labor is low. And so people keep hitting stretches where they make less money and don't have access to a stable income.

This also has the problem that the cost of the training has to be covered. And the folks who are making money will resent that their money is helping other people.

3. Make human labor competitive by lowering its price, and over time raise that price by shrinking the labor supply.

Another policy prescription is to cut those benefits such that the surplus labor becomes competitive with machines at a much lower price point, and then rely on other policies to cause the labor pool to shrink over time.

For example, a starving man will work for less than $7.25.

Cut his medical coverage, and a sick person will die off quickly.

Cut off his social security, and when he is too old to work, he will die of hunger and illness.

Restrict immigration and the number of people who enter the country will decrease over time.

The net effect will be that surplus labor will decline over time. In the short term there will be some pain, but in the long run, this will work out.

In the press, there is some discussion of the heartlessness of the tech industry because we create the machines that displace skill.

Tech is amoral. Our policy prescriptions are moral. If you are outraged by the outcomes of an amoral device, ask yourself which policy prescriptions you favor.


Filed Under: innovation, Jobs, Software

Let’s start using Technical Leverage instead of Technical Debt

June 29, 2016 by kostadis roussos 3 Comments

Over the last year, I’ve been struggling with the term technical debt.

The theory behind technical debt is that there are choices we make that cost money later. And that’s motherhood and apple pie.

The problem with that phrasing is that there is an implicit assumption that technical debt is a bad thing because all debt is bad.

And that is just a profoundly wrong conclusion.

Debt is how you get leverage in the business, and it’s how you get leverage in time in engineering. And engineering is a tradeoff between time and resources.

More generally, because of the negative connotation of debt, the theory of technical debt says that:

Engineering tradeoffs aligned with business priorities are bad if they hurt the architecture.

And that is the wrong answer. Because if the business priorities result in growth and success, then this was the right tradeoff between time and technical correctness.

Engineers can use leverage to go faster, and like businessmen we can overdo it. And when we do — well there are consequences.


Filed Under: Software

How to build a product

February 14, 2016 by kostadis roussos Leave a Comment

There are two fundamental approaches to building products: a technology-first approach and a customer-first approach.

The technology-first approach examines what is possible and, based on what is possible, builds something.

The customer-first approach figures out what the customer needs and then builds something to satisfy that need.

I have had the opportunity to pursue both approaches in my career. And what I have observed is that they can both lead to poor results.

The technology-first approach can produce something that no one wants.

The customer-first approach can produce a few deals that never grow past a certain point.

At one of my jobs, the GM had the head of product management and myself at each other’s throats. The head of product was very customer centric. I was very technology focused. And the GM would only approve a new project if we both agreed. Sometimes, the head of product would wear me down, and I would grumpily agree with his ideas. Sometimes, I would wear the head of product down and he would grumpily go along with mine.

And those ideas had marginal success.

The best products, the ones that were huge successes, were the ones where we could both see how the product would satisfy the customer need and where there was real technology innovation.


Filed Under: innovation, Selling, Software

Scaling efficiently instead of scaling up or out.

November 11, 2015 by kostadis roussos 1 Comment

Over the last few months, I’ve been involved in a lot of discussions about how to make software systems more efficient.

When we look at making software go faster, there are three basic approaches:

  1. Pick a better algorithm
  2. Rearchitect software to take advantage of hardware
  3. Write more efficient software.

From about 1974, when Intel introduced the original 8080, up until 2004, conventional wisdom was that writing more efficient software was a losing proposition. By the time the more efficient software was written, Intel's next-generation processor would be released, improving your code's performance anyway. The time you spent making your software go faster represented a lost opportunity to add features.

As a result, a generation of software engineers was taught about the evils of premature optimization.

Textbooks and teachers routinely admonished students to write correct code, not efficient and correct code.

Starting in 2005, with the shift to multi-core processors, making software go fast became about taking advantage of multiple cores.

Software developers had to adapt their systems to be multi-threaded.

At the same time, software developers noticed that the number of cores per system was limited, and to get ever-increasing scale they had to be able to leverage multiple systems.

And thus the era of scale out distributed architectures began.

In this era, software engineers had to create new algorithms and new software architectures, and writing efficient code was still not viewed as an important part of delivering ever faster software.

From 1974 to 2015, the name of the game was to use more and more hardware to make your software go faster, without any consideration of how efficient the software was. From 1974 to 2004, you just waited for the next processor. From 2004 to 2015, you re-architected your software to take advantage of more cores and, later, to scale out to more systems.

And by 2012, writing large-scale distributed systems was easy. A combination of established frameworks and patterns made it easy to build a system that scaled to hundreds of machines.

Software engineering had discovered the magic elixir of ever-increasing performance: harness an ever-larger number of systems, combined with multi-threaded code, to get seemingly infinite performance.

If the 1975-2004 era made writing efficient code of dubious value, the scale-out age made it even more questionable because you could just add more systems to improve performance.

High-level languages, coupled with clever system architectures, could let anyone deliver an application at scale with minimal effort.

Was this the end of history?

No.

It turns out that large scale-out systems are expensive. Much as processors hit a power wall, massive data centers that consume huge amounts of energy are expensive, and companies started to wonder: how do I reduce the power bill?

And the answer was to make the code run more efficiently. We saw things like HipHop and Rust emerge: HipHop (Facebook's PHP compiler) tried to optimize existing code, while Rust tries to provide a better language for writing efficient code. In parallel, we have seen platforms like Node.js and languages like Go become popular because they allow for more efficient code.

Software efficiency has become valuable again. After a 40-year wait, the third pillar of software performance is the belle of the ball.

And what is interesting is that the software systems of the last 40 years are ridiculously inefficient. Software engineers assumed hardware was free, and because of that assumption, large chunks of software are very inefficient.

The challenge facing our industry is that to improve the efficiency of software we will either have to rewrite the software or figure out how to automatically improve performance without relying on hardware. No white knight is coming to save us.

And we are now looking at a world where performance and scale are not just a function of the algorithms and the architectures, but of the constants. In this brave new world, writing efficient and correct code will be the name of the game.
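As a small illustration of "the constants," consider the sketch below: both functions are O(n), so no algorithm or architecture changes, yet the constant factors differ by roughly an order of magnitude on a typical CPython interpreter. The exact numbers will vary by machine, so treat them as indicative only.

```python
# Illustrative micro-benchmark: both functions are O(n); only the constant
# factor differs. Timings vary by machine, so treat them as indicative.
import time

def sum_python_loop(values):
    total = 0
    for v in values:      # interpreted per-element loop: large constant factor
        total += v
    return total

def sum_builtin(values):
    return sum(values)    # same O(n) work in optimized C: small constant factor

data = list(range(10_000_000))

for fn in (sum_python_loop, sum_builtin):
    start = time.perf_counter()
    fn(data)
    print(f"{fn.__name__}: {time.perf_counter() - start:.3f}s")
```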

We will not only have to scale out and up, we will also have to do so efficiently.

Put differently, perhaps there is no longer such a thing as a premature optimization?


Filed Under: Software

Packet re-ordering is bad.

September 13, 2015 by kostadis roussos Leave a Comment

One of the weirdest things at Juniper was the obsession the networking teams had with reordering packets. They kept talking about how applications could not tolerate reordering.

And this confused me to no end.

After all, TCP was designed on the assumption that packets arrive reordered and out of sequence, and that it would survive that mess.

And then it was explained to me as if I were the networking noob that I am. The problem is that when a packet gets reordered, TCP doesn't perform as well as when packets arrive in order. There are scenarios where TCP will assume the network is congested if a packet doesn't arrive in time and will slow down the connection.

And so, to work around TCP thinking it understands what is going on in the network, ASIC engineers perform heroics to ensure that packets flow through routers in order.
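Here is a toy model, not a protocol implementation, of why reordering looks like loss to TCP: a receiver sends cumulative ACKs for the highest in-order segment it holds, so a late segment generates a run of duplicate ACKs, and three duplicate ACKs are the classic signal for a sender to assume loss, fast-retransmit, and shrink its congestion window.

```python
# Simplified model of a cumulative-ACK receiver, to show why one reordered
# segment produces the duplicate ACKs a TCP sender reads as loss/congestion.
def acks_for_arrival_order(segments):
    expected = 0
    received = set()
    acks = []
    for seg in segments:
        received.add(seg)
        while expected in received:   # advance over any contiguous run we hold
            expected += 1
        acks.append(expected)         # cumulative ACK: next segment we expect
    return acks

in_order = [0, 1, 2, 3, 4, 5]
reordered = [0, 1, 3, 4, 5, 2]        # segment 2 arrives late, nothing is lost

print("in order :", acks_for_arrival_order(in_order))    # [1, 2, 3, 4, 5, 6]
print("reordered:", acks_for_arrival_order(reordered))   # [1, 2, 2, 2, 2, 6]
# The three repeated ACKs for segment 2 look exactly like a drop, so a real
# sender would fast-retransmit and cut its sending rate even though the
# network only reordered the packet.
```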

Then I read this today and I was reminded of those conversations:

http://gafferongames.com/2015/09/12/is-it-just-me-or-is-networking-really-hard/

There are all sorts of very interesting applications that run over the Internet that really are just pumping packets and want them arriving in order or not at all.

And because of these applications, the design of routers is vastly more complex than it would be if the layers above the network did not assume anything about packet ordering.


Filed Under: Hardware, Software

The completely misunderstood IOPS

September 1, 2015 by kostadis roussos Leave a Comment

I was recently in a meeting about performance where the discussion turned to how many IOPS the database was doing.

And what was interesting was how much of our thinking about performance was formed in a world where IOPS were a scarce resource because the underlying media was soooo slow.

In the modern, post-spinning-rust world, IOPS are practically free. The bottleneck is not the underlying media: SSDs, and later things like 3D XPoint memory (what a horrible, horrible name for such an important technology), have essentially free IOPS. The bottleneck is no longer the media (the disk drive) but the electronics that sit in front of it.

The electronics include things like networks, memory buses, and CPUs. We are now bandwidth- and CPU-constrained, no longer media-constrained. What that means is, of course, interesting.

One practical consideration is that trying to minimize IOPS is no longer a worthy effort. Instead, we should be looking at the CPU and memory cost per IOP, and we should be willing to trade some CPU and memory for more IOPS to improve overall system behavior.
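One rough way to frame "CPU cost per IOP" is sketched below: issue a burst of small reads and divide the process CPU time by the number of I/Os. This is an illustrative measurement I am assuming for the sake of the argument, not an established benchmark; real measurements would use direct I/O and proper tooling, and on a freshly written file most reads will come from the page cache, which is exactly the regime where the electronics, not the media, are the cost.

```python
# Rough, illustrative estimate of CPU cost per I/O: do a burst of 4 KiB
# random reads against a scratch file and divide process CPU time by the
# number of reads. Unix-only (os.pread); indicative numbers, not a benchmark.
import os
import random
import tempfile
import time

IO_SIZE = 4096
FILE_SIZE = 64 * 1024 * 1024
N_READS = 20_000

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))
    path = f.name

fd = os.open(path, os.O_RDONLY)
offsets = [random.randrange(0, FILE_SIZE - IO_SIZE) for _ in range(N_READS)]

cpu_start, wall_start = time.process_time(), time.perf_counter()
for off in offsets:
    os.pread(fd, IO_SIZE, off)          # one small read = one "IOP"
cpu_used = time.process_time() - cpu_start
wall_used = time.perf_counter() - wall_start

os.close(fd)
os.unlink(path)

print(f"IOPS           : {N_READS / wall_used:,.0f}")
print(f"CPU us per IOP : {cpu_used / N_READS * 1e6:.2f}")
```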

For folks, like myself, who grew up working really hard to try and avoid doing disk operations, embracing IO is going to be hard…

And as a buddy of mine once said, these materials scientists keep inventing new exotic technologies that keep us systems software engineers busy.

It’s a good time to work in systems.


Filed Under: Hardware, Software, Storage

Metrics over usability 

August 30, 2015 by kostadis roussos Leave a Comment

This is the kind of shit that drove Zynga customers nuts.

In an attempt to drive metrics for other features … they add friction to the top activity… I didn't know about collages, nor do I care to know about them, and I certainly don't want to be reminded of them all the time.

I used to be able to just enter a status; now I have to pick one.

This is just another example of an egregious, metric-driven Facebook feature, like the hyper-aggressive attempts to get me to turn on notifications.


Filed Under: Software
