
My home setup is a full rack.

March 9, 2021 by kostadis roussos 1 Comment

As part of this insane work from home experience, I am discovering the joys of internal PCIe devices and integrated enclosures.

Because of my job, poor technology choices, and desk-space issues, I ended up buying a bookshelf for my office computing systems.

By the time I was done, I realized I had assembled a server rack.

Yes, a server rack.

Why?

First, I need a surface for the home laptop and the cooling system for said laptop.

Then I need another place for the work laptop, and a cooling system.

Then I need a USB hub for the devices that I want to connect to each laptop. Those external USB devices have to be connected to hubs that have enough bandwidth and power, and this is the mother of all Pains in the Asses. Individual USB devices do a really shitty job of arbitrating power draw and bandwidth. Be careful not to put the 4K HDMI adapter on the same USB port (or controller) as a USB drive. I can't wait for someone to build a USB monitoring system.
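
If you want to see which devices are fighting over the same root hub, something like the quick sketch below can help. This is my own illustration, not part of the original setup; it assumes the pyusb package is installed and that the OS lets you enumerate devices.

    # Group USB devices by bus so you can spot which ones share a root hub
    # (and therefore share its bandwidth and power budget).
    from collections import defaultdict

    import usb.core

    devices_by_bus = defaultdict(list)
    for dev in usb.core.find(find_all=True):
        # port_numbers is the chain of hub ports from the root hub down to the device
        ports = ".".join(str(p) for p in (dev.port_numbers or ()))
        devices_by_bus[dev.bus].append(
            f"addr {dev.address:3d}  id {dev.idVendor:04x}:{dev.idProduct:04x}  ports {ports or '-'}"
        )

    for bus, devs in sorted(devices_by_bus.items()):
        print(f"Bus {bus}: {len(devs)} device(s) sharing this root hub")
        for line in devs:
            print("   ", line)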

Then I need a place for the external storage array.

Then I need a power supply for all of my other computer gadgets (headphones, tablet, smart pen).

Then I need a top-of-rack switch that connects all of the elements in the rack.

Then I need a core switch that connects to my home network.

Then I need an edge router and firewall to connect to the internet.

But I am not done!

Because one is my work laptop, and the other my personal laptop, I need a KVM switch.

AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA

Back to your regularly scheduled architect discussion.


Filed Under: Hardware, Random Fun

Is the great cloud shell game over?

March 3, 2019 by kostadis roussos 3 Comments

With the success of VMC on AWS, it's time for us to admit that the idea of Cloud Native as the dominant and only programming model for the cloud is dead.

The Cloud Native programming model was a godsend for IT and software engineers. The move from CAPEX to OPEX forced all of the pre-existing software that ran on premises to be completely re-written for the cloud.

Jobs, careers, and consulting firms exploded as everybody tried to go on this cloud journey.

It was like the great Y2K rewrite, which was followed by the C++ rewrite, which was in turn followed by the great Java rewrite …

This rewrite was forced because on-prem software assumed infrastructure that did not exist in the cloud, and so could not work there.

On-premises, you have very reliable and robust networks, storage that offers five nines (99.999%) of reliability, and a virtualization infrastructure that automatically restarts failed virtual machines.
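
As an aside of my own, not part of the original argument, here is what "five nines" translates to in downtime, a quick back-of-the-envelope calculation:

    # Yearly downtime budget at each availability level.
    minutes_per_year = 365.25 * 24 * 60
    for label, availability in [("three nines", 0.999), ("four nines", 0.9999), ("five nines", 0.99999)]:
        downtime = minutes_per_year * (1 - availability)
        print(f"{label:>11} ({availability:.3%}): ~{downtime:8.1f} minutes of downtime per year")

Five nines works out to roughly five minutes of downtime a year, which is the kind of infrastructure guarantee cloud-native applications were told not to expect.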

Furthermore, on-prem you had the opportunity to right-size your VM to your workload instead of playing whack-a-mole with the bin-packing strategy known as “which cloud instance do I buy today?”

The problem with on-prem was that you had to buy the hardware, and worse you had to learn how to operate the hardware.

The cloud operating environment, where networks are unreliable, storage is unreliable, and applications must be HA-aware and integrate with HA-aware systems, required a full rewrite of perfectly fine, working software.

What motivated this mass migration and mass rewrite?

The motivation was that new software could legitimately be written faster on the EC2 PaaS. Furthermore, companies didn’t want to own the hardware and wanted to rent their infrastructure.

These two factors pushed companies to look at Cloud Native not as a way to augment their existing software assets, but as a once-in-a-lifetime opportunity to rewrite them.

But it turns out that is hard. And it also turns out that the pre-existing on-premises operating model is kind of valuable. Instead of every application having to figure out how to deal with flaky infrastructure, it's just simpler to have more robust infrastructure.

And now that the cost and agility advantage of the cloud has been partly neutralized, what I hope we might see is a collective pause in the software industry as we ask ourselves not whether to be on-prem or cloud-native, but what the appropriate programming model is for the task at hand.


Filed Under: Hardware, innovation, Software

The brakes have brains

February 13, 2017 by kostadis roussos Leave a Comment

A fascinating article about Bosch (https://www.technologyreview.com/s/601502/boschs-survival-plan/).

A couple of things that popped out:

  1. Factories are turning into computers. The interconnection of machines, originally a human task, is now a machine task. In 20 years, a human on the shop floor may look as ridiculous as a human swapping out transistors in an x86 processor.
  2. Data-driven optimization is getting faster. A core fallacy of data-driven product design is that it can drive new products. However, the use of analytics can make existing products more efficient. Pre-existing wireless networks let devices report back to home base very efficiently; coupled with factory floors that can be optimized faster, this has tremendous implications for the product life-cycle.
  3. Humans who rely on brawn or physical stamina are losing value fast.
  4. There is an interesting singularity ahead when 3D printing, data-driven design, and fully automated factories intersect in a meaningful way across the entire manufacturing pipeline. Factories will be able to retool instantaneously to meet instantaneous demand and insight.

The world of yesterday is going away so fast that the only question is whether we will survive to get there.


Filed Under: Hardware, Software

Packet re-ordering is bad.

September 13, 2015 by kostadis roussos Leave a Comment

One of the weirdest things at Juniper was the obsession the networking teams had with reordering packets. They kept talking about how applications could not tolerate reordering.

And this confused me to no end.

After all, wasn't TCP built on the assumption that packets get reordered and arrive out of sequence, and designed to survive that mess?

And then it was explained to me as if I were the networking noob that I am. The problem is that when a packet gets reordered, TCP doesn't perform as well as when packets arrive in order. And there are scenarios where TCP will assume the network is congested if a packet doesn't arrive in time, and will slow down the connection.
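
A toy illustration of that mechanism (my sketch, not something from those conversations): a TCP receiver acknowledges only the highest in-order data it has seen, so one late segment produces a run of duplicate ACKs, and three duplicate ACKs are enough for the sender to assume loss and cut its congestion window.

    # Toy model of a TCP receiver's cumulative ACKs (illustration only, not real TCP).
    # Segments are numbered 1..N; each ACK names the next segment the receiver expects.
    def acks_for(arrival_order):
        received = set()
        next_expected = 1
        acks = []
        for seg in arrival_order:
            received.add(seg)
            while next_expected in received:
                next_expected += 1
            acks.append(next_expected)  # cumulative ACK: "I have everything below this"
        return acks

    print("in order :", acks_for([1, 2, 3, 4, 5, 6]))  # [2, 3, 4, 5, 6, 7]
    print("reordered:", acks_for([1, 2, 4, 5, 6, 3]))  # [2, 3, 3, 3, 3, 7] -> three duplicate ACKs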

And so to work around the TCP protocol thinking it understands what is going on in the network, ASIC engineers do heroics to ensure that packets flow through routers in order.

Then I read this today and I was reminded of those conversations:

http://gafferongames.com/2015/09/12/is-it-just-me-or-is-networking-really-hard/

There are all sorts of very interesting applications that run over the Internet that really are just pumping packets and want them arriving in order or not at all.

And because of these applications, the design of routers is vastly more complex than it would be if the layers above the network assumed nothing about reordered packets.

 


Filed Under: Hardware, Software

The completely misunderstood IOPS

September 1, 2015 by kostadis roussos Leave a Comment

I was recently in a meeting about performance where the discussion turned to how many IOPS was the database doing.

And what was interesting was how much of our thinking about performance was formed in a world where IOPS were a scarce resource because the underlying media was soooo slow.

In the modern, post-spinning-rust world, IOPS are practically free. The bottleneck is not the underlying media: SSDs, and later things like 3D XPoint memory (what a horrible, horrible name for such an important technology), have essentially free IOPS. The bottleneck is no longer the media (the disk drive) but the electronics that sit in front of it.

The electronics include things like networks, memory buses, and CPUs. We are now bandwidth- and CPU-constrained, no longer media-constrained. What that means is, of course, interesting.

One practical consideration is that optimizing for raw IOPS is no longer a worthy effort. Instead, we should be looking at the CPU and memory cost per IOP, and we should be willing to trade off some CPU and memory for more IOPS to improve overall system behavior.
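
Here is a back-of-the-envelope sketch of what "CPU cost per IOP" looks like; the numbers below are illustrative assumptions I am making, not measurements.

    # What fraction of a CPU does a given IOPS rate consume?
    cpu_hz = 3.0e9           # one 3 GHz core (assumed)
    cycles_per_io = 20_000   # assumed software-stack cost to issue and complete one IO

    def cores_needed(iops):
        return iops * cycles_per_io / cpu_hz

    for iops in (200, 100_000, 1_000_000):  # roughly: disk, SATA SSD, NVMe-class rates
        print(f"{iops:>9,} IOPS -> {cores_needed(iops):7.3f} cores of CPU")

At disk-era rates the CPU cost rounds to zero; at NVMe-class rates the IO path itself starts eating whole cores, which is exactly why the bottleneck moves from the media to the electronics in front of it.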

For folks like myself, who grew up working really hard to avoid doing disk operations, embracing IO is going to be hard…

And like a buddy of mine once said, these material scientists keep inventing new exotic technologies that keep us system software engineers busy.

It’s a good time to work in systems.


Filed Under: Hardware, Software, Storage

It wasn’t me, it was my neighbor

December 23, 2014 by kostadis roussos 1 Comment

Keith Adams of Facebook pointed me to this paper:

Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors

The fun part is that we have a new and exciting way to induce memory corruption. Reading memory.

The interesting part is the proposal of a probabilistic algorithm to address the issue.
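
My paraphrase of that proposal, as a sketch (the probability value below is illustrative, not a parameter taken from the paper): every time a row is activated, with some small probability also refresh its physical neighbors, so a row that gets hammered millions of times has its victims refreshed long before bits can flip.

    import random

    P_REFRESH = 0.001  # illustrative probability, not the paper's exact parameter

    def on_row_activate(row, refresh_row):
        # With small probability, refresh the physically adjacent (victim) rows.
        if random.random() < P_REFRESH:
            refresh_row(row - 1)
            refresh_row(row + 1)

    # Hammering one row 200,000 times still refreshes each neighbor roughly
    # P_REFRESH * 200,000 ~= 200 times, resetting the disturbance along the way.
    refreshed = []
    for _ in range(200_000):
        on_row_activate(1000, refreshed.append)
    print("refreshes of row  999:", refreshed.count(999))
    print("refreshes of row 1001:", refreshed.count(1001))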

This continues my enduring belief that relying on reliable hardware to make software work is increasingly a fool's errand. As we patch more and more of the cracks, eventually we will have to stare at the chasms and rethink software design.

In the meantime, the next time I get a memory corruption I am pointing my boss to this paper.

 


Filed Under: Hardware, innovation

Hardware not Software is Eating the World: Len was right.

November 28, 2014 by kostadis roussos 1 Comment

In 1996, Len Widra, a Principal Engineer at SGI, and I got into a heated argument over the importance of software vs. hardware.

I was a kid out of school with an ego to match. Of course, I thought I knew everything. The crux of our debate was whether software was a relevant, or even important, technology.

Len’s observation was that software was irrelevant or something like that. Hardware, he observed, was the important technology.

As a software engineer, I found this infuriating. As a computer scientist, I didn't find it very kind. How dare he say that a bunch of silicon was more important than my code?

It’s been almost 18 years, and I’ve learned more.

What I have learned is that new software rarely, if ever, displaces old software unless some new hardware shows up. When it does, that new hardware makes the old software irrelevant or obsolete.

There is one interesting caveat. Some software applications are really dependent on the quality of the algorithms, and as the algorithms improve, the software gets obsoleted regardless of the underlying hardware changes. In many cases, the emergence of new algorithms creates new hardware that helps obsolete the old software.

For the vast majority of software systems, however, that’s not the case.

When you are looking for a new opportunity in the technology space, what you need to look for is where new hardware is emerging. If it is sufficiently different, that new hardware will obsolete the old software that was tied to the old hardware, creating new opportunities for new software.

A mouthful.

A few examples:

(1) the emergence of x86 servers created the opening for Linux. Before x86 servers were a reality, the UNIX vendors owned the entire software and hardware stack. When x86 became good enough, a new software stack could win because the software used new hardware.

(2) Flash in the storage industry has truly created a massive disruption, enabling many different kinds of software stacks.

(3) Merchant silicon (e.g., Broadcom) is disrupting the networking space in a way that reminds me of the x86 disruption.

(4) ARM processors made mobile computing plausible.

Maybe my favorite example is this chart from TIOBE Software, which measures the popularity of programming languages. TIOBE measures popularity, not use or lines of code, and has been doing that analysis for many years:

[Chart: TIOBE programming language popularity over time, as of November 2014]

You look at the chart, and you realize how slowly programming language popularity changes, except for one language: Objective-C. The popularity of a single programming language changed dramatically not because the language was good or bad, but because a single new hardware platform enabled new software.

The hardware disrupts because it enables software that was impossible before. The carefully calibrated trade-offs that are baked into a system are tossed into the sea with new hardware. When you want to look for disruptions to your business, never look at software; software is irrelevant; look at the hardware …

 


Filed Under: Hardware, innovation, Software Tagged With: Disruption

 
