
Showing posts from March, 2011

PowerShell script to enable Windows to capture localhost traffic in Wireshark

If you want to understand why the following scripts work, read this post. Otherwise, just paste the following into an elevated PowerShell window.

Set up Windows networking to allow localhost capturing in Wireshark:

# Find the network configuration that has the default gateway.
$defaultAdapter = Get-WMIObject Win32_NetworkAdapterConfiguration | ? {$_.DefaultIPGateway}
if (@($defaultAdapter).Length -ne 1) {throw "You don't have 1 default gateway, your network configuration is not supported" }
# Route local IP address via the default gateway
route add $defaultAdapter.IPAddress[0] $defaultAdapter.DefaultIPGateway
Write-Host "Start capturing on localhost by connecting to $($defaultAdapter.IPAddress[0])"

Return Windows networking to its normal configuration:

# Find the network configuration that has the default gateway.
$defaultAdapter = Get-WMIObject Win32_NetworkAdapterConfiguration | ? {$_.DefaultIPGateway}
if (@($defaultAdapter).Length -ne 1) {throw "Y
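The excerpt cuts off there, but judging from the setup script the cleanup presumably just deletes the route that was added. A minimal sketch of what it likely does (my guess, not the original post's code):

# Hypothetical cleanup, mirroring the setup script above (assumed, not from the post)
$defaultAdapter = Get-WMIObject Win32_NetworkAdapterConfiguration | ? {$_.DefaultIPGateway}
if (@($defaultAdapter).Length -ne 1) {throw "You don't have 1 default gateway, your network configuration is not supported" }
# Delete the host route so local traffic stops being sent out to the gateway
route delete $defaultAdapter.IPAddress[0]
Write-Host "Localhost traffic is no longer routed via the default gateway"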

How did we get a 53-byte packet size in ATM?

I'll be honest, I don't know squat about ATM, but I was having lunch with this fellow, and he told me the story of the 53-byte ATM packet.  You can find more details on Wikipedia, but here’s the synopsis: (Disclaimer: I’m not an expert in ATM; nor am I trying to teach you technical details about ATM networks; so I’ll hand wave and trade off accuracy for simplicity. For example, ATM does have variable-sized packets which it divides into cells, and it is the cells which are 53 bytes long. However, since the closest thing to a cell in common networks is an Ethernet packet, I’ll simply refer to cells as packets.) ATM is designed to be shared between data network applications and voice network applications(+). In data networks we want large packets because this gives maximum efficiency.  This is because each packet has a fixed-size header, and thus the more data you can transmit per packet, the higher your ‘real’ throughput. For voice networks we want to reduce latency.
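To put rough numbers on that tradeoff (my own back-of-the-envelope arithmetic, not from the post): an ATM cell is 5 header bytes plus 48 payload bytes, and a standard 64 kbit/s voice channel produces 8,000 bytes per second, so merely filling one cell's payload adds 6 ms of delay, while the header still eats roughly 9% of every cell:

# Back-of-the-envelope numbers for the ATM cell-size tradeoff (my own, not the post's)
$headerBytes      = 5      # ATM cell header
$payloadBytes     = 48     # ATM cell payload
$voiceBytesPerSec = 8000   # 64 kbit/s PCM voice channel

# Packetization delay: time to fill one cell's payload with voice samples
$fillDelayMs = $payloadBytes / $voiceBytesPerSec * 1000               # 6 ms
# Fixed header overhead paid on every cell (why data folks wanted bigger cells)
$overheadPct = $headerBytes / ($headerBytes + $payloadBytes) * 100    # ~9.4%

"Packetization delay: $fillDelayMs ms; header overhead: $([math]::Round($overheadPct, 1))%"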

The cloud lets you evaluate the cost of performance optimizations

One of the things I love about cloud computing is that you can put an honest price on computing time.  You can then balance the human engineering time required to optimize code (and often end up with more complex code) vs just paying for the cloud to do it.  The Zillow rent estimate post speaks to this brilliantly: We implemented the Rent Zestimation process as a software application taking input from Zillow databases and producing an output table with about 100 million rows. We deploy this software application into a production environment using Amazon Web Services (AWS) cloud.  The total time to complete a run is four hours using four of Amazon EC2 instances of Extra-Large-High-CPU type.  This type of machine costs $1.16/hr.  Thus, it costs us about $19 to produce 100 million Rent Zestimates which is the same as a 3-D movie ticket or about 5 gallons of gasoline in New York City today. A few things to note about this quote: If your data processing can be done in parallel, you’d have
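For what it's worth, the $19 figure is just the quoted numbers multiplied out (my own arithmetic, using only the figures in the quote):

# Cost of one Rent Zestimate run, from the numbers quoted above
$instances   = 4      # Extra-Large-High-CPU EC2 instances
$hours       = 4      # wall-clock time for a full run
$ratePerHour = 1.16   # dollars per instance-hour
$cost = $instances * $hours * $ratePerHour    # 18.56, i.e. about $19 for ~100 million rows

"Cost per run: `$$cost"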

Supply, demand and the trackball market.

The author of this blog is a devout trackball man, and as any devout trackball man can tell you, it has been trying times for trackball users in the last few years. You see, trackballs were never really in style, and no one has made a new trackball for a while.  It's so bad out there that one of my favorite trackballs now sells used for over $200 and new for $600. Crazier than that, you can send away to get your trackball reconditioned on eBay for a whopping $100, where they'll clean it and put on a new cord. Thankfully, there has been movement in the trackball market. Logitech released a new wireless trackball!!! So far I love the tiny USB receiver (which fits in the trackball when not in use). I wish the trackball was larger, but after several months of use, I can say it's a perfectly reasonable device. P.S. If you're curious, here's the history of the trackball.