Network Solutions SUED for front running!

Just a brief follow-up to my last post – looks like those bastards are getting sued for being such pieces of sh*t. SWEET.

I only hope that justice can be served in an appropriate and fair way. For instance, the entire executive team at Network Solutions could be drawn and quartered.

Network Solutions’ domain name front running – the monopoly that wouldn’t die

It seems that the once-monopolistic domain registrar, Network Solutions, has decided they need more power again. Domain Name Wire’s article reads like a bizarre April Fool’s joke at first glance, but it’s true. I tried it out with sweettemplatesforphp.com just for kicks, and those bastards really did park the domain.

Their motives almost seem genuine: “This is a customer protection measure to protect customers from frontrunners. After four days, we release the domain.” says Network Solutions’ spin doctor PR spokeswoman, Susan Wade.

But if this is truly their goal, why is there no mention of it when you do a search? Why is there no option to skip it? Why the hell isn’t there a giant blinking warning? “IF YOU SEARCH FOR A DOMAIN WE’LL F*CKING SNAG IT FOR FOUR DAYS SO YOU CAN’T SHOP AROUND!”

I get it that they aren’t forcing you to pay a premium to register the domain from them. They’re just “safeguarding” it from the real front runners. But the thing is, they’re guaranteeing that if I do a search for a domain, I cannot shop around for prices without going through this BS waiting period. A much more elegant solution (if they really want one, which I suspect they do not) would be a little checkbox:

Protect this domain from front running?

If it takes me two minutes to come up with a solution that isn’t controversial, it can’t be that hard….

Sloccount’s sloppy slant – or – how to manipulate programming projects the Wheeler Way

Sloccount is my newest Awesome Software Discovery. It’s a great idea, but is far too simple to do what it claims: estimate effort and expense of a product based on lines of code. And really, I wouldn’t expect it to be that great. The model used to estimate effort is certainly not the author’s fault, as it isn’t his model. But that idiot (David Wheeler) doesn’t just say it’s a neat idea – he actually uses this horrible parody of good software to “prove” that linux is worth a billion dollars. For the record, I prefer linux for doing any kind of development. I hate Windows for development that isn’t highly visual in nature (Flash, for instance, kind of requires Win or Mac), and Macs are out of my price range for a computer that doesn’t do many games. So Linux and I are fairly good friends. I just happen to be sane about my liking of the OS. (Oh, and BSD is pretty fracking sweet, too, but Wheeler didn’t evaluate it, so neither will I)

The variables

To show the absurdity of sloccount, here’s a customized command line that assumes pretty much the cheapest possible outcome for a realistic project. The project will be extremely easy in all factors that make sense in a small business environment. We assume an Organic model, as it is low-effort and the most likely situation for developing low-cost software.

Basically I’m assuming a very simple project with very capable developers. I’m not assuming the highest capabilities for the dev team because some of that stuff is just nuts – the whole team on a small project just isn’t likely to have 12+ years of experience and sit in the top 10% of all developers. But the assumptions here are still extremely high – the team is in the top 75% in all areas, with 6-12 years of experience, yet pay is very low all the same. This should show a pretty much best-case scenario.

Also, I’m setting overhead to 1 to indicate that in our environment we have no additional costs – developers work from home on their own equipment, we market via a super-cheap internet site or something (or don’t market at all and let clients do our marketing for us), etc.

Other factors (from sloccount’s documentation ):

  • RELY: Very Low, 0.75
    • We are a small shop, we can correct bugs quickly, our customers are very forgiving. Reliability is just not a priority.
  • DATA: Low, 0.94
    • Little or no database to deal with. Not sure why 0.94 is the lowest value here, but it is, so I’m using it.
  • CPLX: Very Low, 0.70
    • Very simple code to write for the project in question. We’re a small shop, man, and we just write whatever works, not whatever is most efficient or “cool”.
  • TIME: Nominal, 1.00
    • We don’t worry about execution time, so this isn’t a factor for us. Assume we’re writing a GUI app where most of the time, the app is idle.
  • STOR: Nominal, 1.00
    • Same as time – we don’t worry about storage space or RAM. We let our users deal with it. Small shop, niche market software, if users can’t handle our pretty minimal requirements that’s their problem.
  • VIRT: Low, 0.87
    • We don’t do much changing of our hardware or OS.
  • TURN: Low, 0.87
    • I don’t know what this means, so I’m assuming the best value on the grid.
  • ACAP: High, 0.86
    • Our analysts are good, so we save time here.
  • AEXP: High, 0.91
    • Our app experience is 6-12 years. Our team just kicks a lot of ass for being so underpaid.
  • PCAP: High, 0.86
    • Again, our team kicks ass. Programmers are very capable.
  • VEXP: High, 0.90
    • Everybody kicks ass, so virtual machine experience is again at max, saving us lots of time and money.
  • LEXP: High, 0.95
    • Again, great marks here – programmers have been using the language for 3+ years.
  • MODP: Very High, 0.82
    • What can I say? Our team is very well-versed in programming practices, and makes routine use of the best practices for maintainable code.
  • TOOL: Very High, 0.83
    • I think this is kind of a BS category, as the “best” system includes requirements gathering and documentation tools. In a truly agile, organic environment, a lot of this can be skipped simply because the small team (like 2-3 people) is so close to the codebase that they don’t have any need for complexities like “proper” requirements gathering. Those things on a small team can really slow things down a lot. So I’m still giving a Very High rating here to reflect speedy development, not to reflect the grid’s specific toolset. For stupid people (who shouldn’t even be reading this article), this biases the results against my claim, not for it.
  • SCED: Nominal, 1.00
    • Not sure why nominal is best here, but it’s the lowest-effort value so it’s what I’m choosing. Dev schedules in small shops are often very flexible, so it makes sense to choose the cheapest option here.

So our total effort will be:

0.75 * 0.94 * 0.70 * 1.00 * 1.00 *                # RELY - STOR
0.87 * 0.87 * 0.86 * 0.91 * 0.86 *                # VIRT - PCAP
0.90 * 0.95 * 0.82 * 0.83 * 1.00 *                # VEXP - SCED
2.3                                               # Base organic effort

= 0.33647 effort
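Redoing this multiplication by hand every time a factor changes is error-prone, so here’s a quick sketch of the same arithmetic in Python (values copied straight from the list above; swap entries in the dict to reproduce the later scenarios):

```python
# COCOMO cost drivers from the list above (best-case small-shop values)
factors = {
    "RELY": 0.75, "DATA": 0.94, "CPLX": 0.70, "TIME": 1.00, "STOR": 1.00,
    "VIRT": 0.87, "TURN": 0.87, "ACAP": 0.86, "AEXP": 0.91, "PCAP": 0.86,
    "VEXP": 0.90, "LEXP": 0.95, "MODP": 0.82, "TOOL": 0.83, "SCED": 1.00,
}

BASE_ORGANIC = 2.3  # base effort coefficient for the Organic model

effort = BASE_ORGANIC
for multiplier in factors.values():
    effort *= multiplier

print(round(effort, 5))  # 0.33647
```

Setting ACAP, AEXP, and PCAP back to 1.00 gives the 0.4999 figure used further down.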

We’re also going to assume a cheap shop that pays only $40k a year to programmers, because it’s a small company starting out. Or the idiot boss only pays his kids fair salaries. Or something.

Command line:

sloccount --overhead 1 --personcost 40000 --effort 0.33647 1.05

Bloodsport Colosseum

For something simple like Bloodsport Colosseum, this is an overly high but acceptable estimate. With HTML counted, the estimate is 5.72 man-months. Without, it’s 4.18 man-months. We’ll go with the average, since my HTML counter doesn’t worry about comments, and even with rhtml having embedded ruby, the HTML was usually easier than the other parts of the game. So this comes to 4.95 months. That’s just about 21 weeks (4.95 months @ 30 days a month, divided by 7 days a week = just over 21). At 40 hours a week that works out to 840 hours. I spent around 750 hours from start (design) to finish. I was very unskilled with Ruby and Rails, so an estimate that exceeds my actual time is certainly running high (remember, I estimated for people who were highly skilled), and a lot of the time I spent on the project was replacing code, not just writing new code. But overall it’s definitely an okay ballpark figure.
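For reference, here’s the month-to-week-to-hour conversion I’m using throughout, as a tiny sketch (assuming 30-day months, 7-day weeks, and 40 working hours per week, like the estimate in the text):

```python
def man_months_to_hours(months, hours_per_week=40):
    """Convert sloccount man-months to working hours (30-day months)."""
    weeks = months * 30 / 7
    return weeks * hours_per_week

print(round(man_months_to_hours(4.95)))  # 849, i.e. roughly the 840 quoted
```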

When you start adding more realistic data, though, things get worse.

If you simply assume the team’s capabilities are average instead of high (which is about right for BC), things get significantly worse, even though the rest of the factors stay the same:

0.75 * 0.94 * 0.70 * 1.00 * 1.00 *                # RELY - STOR
0.87 * 0.87 * 1.00 * 1.00 * 1.00 *                # VIRT - PCAP
0.90 * 0.95 * 0.82 * 0.83 * 1.00 *                # VEXP - SCED
2.3                                               # Base organic effort

= 0.4999 effort

This changes our average from 4.95 man-months to 7.3 months, or about 31 weeks. That’s 1240 hours of work, well more than I actually spent. From design to final release, including the 1,000-2,000 lines of code that were removed and replaced (i.e., big effort for no increase in LoC), I spent about 40% less time than the estimate here.

…And for the skeptics, no, I’m not counting the rails-generated code, such as scripts/*. I only included app/, db/ (migration code), and test/.

However, this still is “close enough” for me to be willing to accept that it’s an okay estimate. No program can truly guess the effort involved in any given project just based on lines of code, so being even remotely close is probably good enough. The problem is when you look at less maintainable code.

Just for fun, you can look at the dev cost, which is $21k to $28k, depending on whether you count the HTML. I wish I could have been paid that kind of money for this code….

Murder Manor

This app took me far less time than BC (no more than 150-200 hours). I was more adept at writing PHP when I started this than I was at writing Ruby or using Rails when I started BC. But the overall code is still far worse because of my lack of proper OO and such. So I tweak the numbers again to reflect a slightly skilled user of the language, but worse practices, worse software tools, and a slightly more complex product (the code was more complex even though BC as a project had more complex rules – ever wonder why I switched away from PHP for anything over a few hundred lines of code?):

0.75 * 0.94 * 0.85 * 1.00 * 1.00 *                # RELY - STOR
0.87 * 0.87 * 1.00 * 1.00 * 1.00 *                # VIRT - PCAP
0.90 * 0.95 * 1.00 * 1.00 * 1.00 *                # VEXP - SCED
2.3                                               # Base organic effort

WHOA. Effort jumps to 0.8919! New command line:

sloccount --overhead 1 --personcost 40000 --effort 0.8919 1.05

This puppy ends up being 3.4 months of work. That’s 14.5 weeks, or 580 hours of work — around triple my actual time spent!

Looking at salary info is something I tend to avoid because as projects get big, the numbers just get absurd. In this case, even with a mere 3500-line project, the estimate says that in the environment of cheap labor and no overhead multiplier, you’d need to pay somebody over $10k to rewrite that game. Good luck to whatever business actually takes these numbers at face value!

But these really aren’t the bad cases. Really large codebases are where sloccount gets absurd.

Big bad code

Slash ’em is a great test case. It isn’t OO, is highly complex, and has enough areas of poor code that I feel comfortable using values for average-competency programmers. So here are my parameters, in depth:

  • RELY: Very Low, 0.75
    • Free game, so not really any need to be highly-reliable.
  • DATA: Nominal, 1.00
    • The amount of data, in the form of text-based maps, data files, oracle files, etc. is pretty big, so this is definitely 1.00 or higher.
  • CPLX: Very High, 1.30
    • Complex as hell – the codebase supports dozens of operating systems, and has to keep track of a hell of a lot of data in a non-OO way. It’s very painful to read through and track things down.
  • TIME: High, 1.11
    • Originally Nethack was built to be very speedy so it could run on extremely slow systems. There are tons of hacks in the code to allow for speeding up execution even today, possibly to accommodate pocket PCs or something.
  • STOR: Nominal, 1.00
    • I really can’t say for sure if Slash ‘Em is worried about storage space. It certainly isn’t worried about disk, as a lot of data files are stored in a text format. But I don’t know how optimized it is for RAM use – so I choose the lowest value here.
  • VIRT: Nominal, 1.00
    • Since the app supports so many platforms, this is higher than before. I only chose Nominal because once a platform is supported it doesn’t appear its drivers change regularly if at all.
  • TURN: Low, 0.87
    • Again, I don’t know what this means, so I’m assuming the best value on the grid.
  • ACAP: Nominal, 1.00
    • Mediocre analysts
  • AEXP: Nominal, 1.00
    • Mediocre experience
  • PCAP: Nominal, 1.00
    • Mediocre programmers
  • VEXP: Nominal, 1.00
    • Okay experience with the virtual machine support
  • LEXP: Nominal, 1.00
    • Mediocre language experience
  • MODP: Nominal, 1.00
    • The code isn’t OO, which for a game like this is unfortunate, but overall the code is using functions and structures well enough that I can’t really complain about a lot other than lack of OO.
  • TOOL: Nominal, 1.00
    • Again, nominal here – the devs may have used tools for developing things, I really can’t be sure. I know there isn’t any testing going on, so I can be certain that 1.00 is the best they get.
  • SCED: Nominal, 1.00
    • The nethack and slash ’em projects are unfunded, and have never (as far as I can tell) worried about a release schedule. Gotta choose the cheapest value here.

Total:

0.75 * 1.00 * 1.30 * 1.11 * 1.00 *                # RELY - STOR
0.87 *                                            # TURN (the rest are 1.00)
2.3                                               # Base organic effort

Total is now 2.166 effort. New command line, still assuming cheap labor and no overhead:

sloccount --overhead 1 --personcost 40000 --effort 2.166 1.05

Slash ‘Em is a big project, no doubt about it. But the results here are laughable at best. The project has 250k lines of code, mostly ANSI C. The estimate is that this code would take nearly 61 man-years of effort. The cost at $40k a year would be almost $2.5 million! With an average of just under 24 developers, the project could be done in two and a half years.
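For the curious, sloccount’s schedule model is just Basic COCOMO: person-months = effort_factor × KSLOC^1.05. A sketch with the Slash ‘Em numbers (I’m assuming a flat 250k lines here, so the output lands a touch under the figures sloccount reports from its exact count):

```python
def cocomo_person_months(sloc, effort_factor, exponent=1.05):
    """Basic COCOMO estimate, as driven by sloccount's --effort option."""
    return effort_factor * (sloc / 1000.0) ** exponent

person_months = cocomo_person_months(250_000, 2.166)
man_years = person_months / 12
cost = man_years * 40_000  # $40k/year, overhead 1

print(round(man_years, 1))  # ~59 with a flat 250k; the exact count lands near 61
print(round(cost))          # ~$2.4 million
```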

I worked for a company a while ago that built niche-market software for the daycare industry. They had an application that took 2-3 people around 5 years to build. It was Visual C code, very complex, needed a lot more reliability than Slash ‘Em, was similar in size (probably closer to 200k lines of code), and had a horrible design process in which the boss would change his mind about which features he wanted fairly regularly, sometimes scrapping large sections of code. That project took at most 15 man-years to produce. To me, the claim that Slash ‘Em took four times that effort is a great reason to make the argument that linux isn’t worth a tenth what Wheeler claims it is. Good OS? Sure. But worth a billion dollars??

Linux and the gigabuck

I’m just not sure how anybody could buy Wheeler’s absurd claim that Linux would cost over a billion dollars to produce. Sloccount is interesting for sure, particularly for getting an idea of one project’s complexity compared to another project. But using the time and dollar estimates is a joke.

Wheeler’s own BS writeup proves how absurd his claims are: Red Hat Linux 6.2 would have taken 4,500 man-years to build, while 7.1, released a year later, would have taken 8,000 man-years. I’m aware that there was a lot of new open source in the project, and clearly a small team wasn’t building all the code. But to claim that the extra 13 million lines of code are worth 3,500 years of effort, or 400 million dollars…. I dunno, to me that’s just a joke.

And here’s the other thing that one has to keep in mind: most projects are not written 100% in-house. So this perceived value of Linux due to the use of open source isn’t exclusive to Linux or open source. At every job I’ve had, we have used third-party code, both commercial and open source, to help us get a project done faster. At my previous job, about 75% of our code was third-party. And in one specific instance, we paid about a thousand dollars to get nearly 100,000 lines of C and Delphi code. The thing with licensing code like this is that the company doing the licensing isn’t charging every user the value of their code – they’re spreading out the cost to hundreds or even thousands of users so that even if their 100k lines are worth $50k, they can license the code to a hundred users at $1000 a pop. Each client pays 2% of the total costs – and the developers make more money than the code is supposedly worth. And clearly this saves a ton of time for the developer paying for the code in question.

If you ignore the fact that big companies can use open source (or commercially-licensed code), you can conjure up some amazing numbers indeed.

I can claim that Bloodsport Colosseum is an additional 45 months of effort simply by counting just the ruby gems I used (action mailer, action pack, active record, active support, rails, rake, RedCloth, and sqlite3-ruby). Suddenly BC is worth over $175k (remember, labor is still $40k a year and I am still assuming a low-effort project) due to all the open source I used to build it.

Where exactly do we draw the line, then? Maybe I include all of Ruby’s source code since I used it and its modules to help me build BC. Can I now claim that BC is worth more than a million dollars?

Vista is twice as good as Linux!

As a final proof of absurdity, consider Microsoft: a pretty bad track record for projects shipping on time, and a whole corporate design/development flow slowing things down. Vista is supposed to be in the realm of 50 million lines of code. Using the same methods Wheeler used to compute linux’s cost and effort, we get Vista being worth a whole hell of a lot more:

Total physical source lines of code:                    50,000,000
Estimated Development Effort in Man-Years:              17,177
Estimated Cost ($56,286/year salaries, overhead=2.4,
  as in the linux estimate):                            $2.3 billion
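Wheeler’s arithmetic reproduces almost exactly from the same Basic COCOMO formula; sloccount’s default effort factor is 2.4, and with the salary and overhead figures above the Vista numbers fall right out (a sketch, assuming the 50-million-line figure is accurate):

```python
def cocomo_person_months(sloc, effort_factor=2.4, exponent=1.05):
    """Basic COCOMO with sloccount's default parameters."""
    return effort_factor * (sloc / 1000.0) ** exponent

person_months = cocomo_person_months(50_000_000)
man_years = person_months / 12
cost = man_years * 56_286 * 2.4  # salary and overhead from the table above

print(round(man_years))        # 17177
print(round(cost / 1e9, 1))    # 2.3 (billion dollars)
```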

To me these numbers look just as crazy as the ones in the Linux estimate, but MS being the behemoth it is, I’m not going to try and make a case either way. Just keep in mind that MS would have had to dedicate almost 3,000 employees to working on Vista full-time in order to get 17,177 years of development done in 6.

The important thing here is that by Wheeler’s logic, Vista is actually worth more than linux. By a lot.

Linux fanatics are raving idiots

So all you Linux zealots, I salute you for being so fiercely loyal to your favorite OS, but coming up with data like this (or simply believing in and quoting it) just makes linux users look like a raving pack of fools. Make your arguments, push your OS, show the masses how awesome Linux can be. But make sound arguments next time.


The move to typo 4.0

Typo is my blogging software. Written in Ruby on Rails, it seemed like an ideal choice for me since I’m a big fan of the RoR movement. But like so many other open source applications, Typo has got some major problems. I’m not going to say another open source blog would have been better (though I suspect this is true from other pages I’ve found on the net), but Typo has been a major pain in the ass to upgrade.

For anybody who has to deal with this issue, I figure I’ll give a nice account here.

First, the upgrade tool is broken. If you have an old version of typo that has migrations numbered 1 through 9 instead of 001 through 009, you get conflicts during the attempt at migrating to the newest DB format. You must first delete the old migrations, then run the installer:

rm /home/user/blog_directory/db/migrations/*
typo install /home/user/blog_directory

Now you will (hopefully) get to the tests. These will probably fail if, like me, your config/database.yml file is old and doesn’t use sqlite. Or hell, if it does use sqlite but your host doesn’t support that. Anyway, so far as I’m concerned the tests should be unnecessary by the time the Typo team releases a version of Typo to the public.

Next, if you have a version of typo that uses the components directory (back before plugins were available in Rails, I’m guessing), the upgrade tool does not remove it. This is a big deal, because some of the components that are auto-loaded conflict with the plugins, causing all sorts of stupid errors. That directory has to be nuked:

rm -rf /home/username/blog_directory/components

This solves a lot of issues. I mean, a lot. If you’re getting errors about the “controller” method not being found for the CategorySidebar object, this is likely due to the components directory.

Another little quirk is that when Typo installs, it replaces the old vendor/rails directory with the newest Rails code. But it does not remove the old code! This is potentially problematic, as I ended up with a few dozen files in my vendor/rails tree that weren’t necessary, and may have caused some of my conflicts (I never was able to fully test this and now that I have things working, I’m just not interested). Very lame indeed. To fix this, kill your rails dir and re-checkout version 1.2.3:

rm -rf /home/username/blog_directory/vendor/rails
rake rails:freeze:edge TAG=rel_1-2-3

My final gripe was the lack of even a mention that older themes may not work. I had built a custom typo theme which used some custom views. But of course I didn’t know it was the theme until I spent a little time digging through the logs to figure out why things were still broken. Turned out my theme, based on the old Azure theme and some of the old view logic for displaying articles, was trying to hit code that no longer existed. Yes, my theme was using an old view and the old layout, both of which were hitting no-longer-existing code. But better API coding for backward compatibility would have made sense, since they did give you the option to use a theme to override views and layouts. Or at the very least, a warning would have been real nice. “Danger, danger, you aren’t using a built-in theme! Take the following precautions, blah blah blah, Jesus loves you.”

How do you fix the theme issue, though, if you can’t even log in to the blog to change it? Well, like all good programmers who are obsessively in love with databases, the typo team decided to store the config data in the database. And like all bad open-source programmers, they stored that data in an amazingly stupid way. I like yaml, don’t get me wrong – it’s amazingly superior to that XML crap everybody wants to believe is useful. But in a database, storing data in yaml format seems just silly.

<rant>

PEOPLE, LISTEN UP, if you’re going to store config that’s totally and utterly NOT relational, do not use a relational database. It’s simple. Store the config file as a yaml file. If you are worried about the blog being able to write to this file, fine, store your data in the DB, but at least store it in a relational sort of way. Use a field for each config directive if they’re not likely to change a lot, or use a table that acts like a hash (one field for blog_id, one for setting_name, one for setting_value). But do something that is easy to deal with via SQL. Show me the SQL to set my theme from ‘nerdbucket’ to ‘azure’ please. When you can’t use your database in a simple, straightforward way, you’ve fracking messed up. Yes, there are exceptions to every rule, but this blog software is not one of them. It would not have been hard to store the data in a neutral format that would make editing specific settings easy.
</rant>

Sorry. Anyway, how to fix this – the database has a table called “blogs” that has a single record for my blog. This record stores the base url and a bunch of yaml for the site config. You edit the field “settings” in the blogs table, and change just the part that says “theme: blah” to “theme: azure”. If you don’t have access to a simple tool like phpmyadmin, then you’ll likely have to retrieve the value from your mysql prompt, edit it in the text editor of your choice, and then reset the whole thing, making sure to use newlines properly so as not to screw up the yaml format…. Then you are up and running and can worry about fixing the theme at your leisure.
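If you’d rather script the substitution than hand-edit YAML at a mysql prompt, the text surgery itself is trivial. A sketch (fetching and re-storing the settings column is left to your MySQL client of choice, and the ‘nerdbucket’ theme name is of course just mine):

```python
import re

def switch_theme(settings_yaml, new_theme):
    """Rewrite just the 'theme:' line in typo's YAML settings blob."""
    return re.sub(r"^(\s*theme:\s*).*$", r"\g<1>" + new_theme,
                  settings_yaml, flags=re.MULTILINE)

# The settings field as fetched from the "blogs" table
settings = "blog_name: Nerdbucket\ntheme: nerdbucket\nlimit_article_display: 10\n"
print(switch_theme(settings, "azure"))
```

Editing one line this way avoids the newline-mangling risk of pasting the whole blob back by hand.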

Now, to be fair, I think I could have logged in to the admin area without fixing my theme, and then fixed it there. But with all the problems I was having, I thought it best to set the theme in the DB to see if that helped get the whole app up and running. Obviously it wasn’t the theme that was killing my admin abilities (and I can’t even remember anymore what it was). But once I hit that horrible config storage, my stupidity felt ever so much smarter compared to the person who designed typo’s DB.

Typo is pretty sweet when you don’t have to delve under the hood. But “little” things like that can make or break software, and I hope to <deity of your choice> that the next upgrade is a whole lot smoother.


UPDATE UPDATE HOORAY

One more awesome annoyance. It seems all my old blog articles are tagged as being formatted with “Markdown”. When I created them, I formatted them with “Textile”. If you’re not up on these two awesome formatting tools, take a look at a simple example (the first is how Textile appears when run through the Markdown formatter):

  • This “website”:http://www.nerdbucket.com is really sweet, dude!
  • This website is really sweet, dude!

I’ve been using Markdown lately as I kind of prefer it now. But my old articles are in Textile format. I don’t know why upgrading my fracking blog loses the chosen format, but boy is it fun going through old articles and fixing them!!


How not to benchmark software

I’ve just stumbled upon an amazingly misinformed benchmark about Flex, from an actual Adobe employee, Matt Potter.

This guy benchmarks JSON, AMFPHP, and XML as ways to transmit data between PHP and a Flex app. His findings show that XML is generally faster than either JSON or AMFPHP. This “discovery” could revolutionize the way we send and receive data on the net! Who’d have thought that XML is so efficient? Truly amazing results!

But if we choose to drop back down to Earth from the blissful land of Ignoramia, we may find that even Adobe devs can make horrible mistakes.

So why is this year-old article worth dissecting? Simple – it comes up FIRST when you search google for “json flex”, which makes it a great tool of misinformation for people looking for ways to incorporate the awesomeness of JSON into flex! Note that if it were a random article that was at least moderately hard to find, I probably wouldn’t care too much.

So Matt Potter compares XML, AMFPHP, and JSON. His first and most amazing mistake is that he sends hand-built raw XML, while converting PHP data structures into JSON and AMFPHP – so the XML test skips the serialization step entirely. XML is expensive to create as well as to read, so skipping that step completely invalidates his article in my opinion. But worse still, he tests against a local server. The network overhead of XML is going to be significantly worse than JSON in most cases (no idea about AMFPHP as I’ve never used it), so ignoring the 2-3x bigger data really doesn’t do much for providing a valid test.
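The size gap is easy to demonstrate. A quick sketch serializing the same record list both ways (the XML shape here is just my guess at a typical row-set encoding, not Matt’s exact format; with flat data like this the gap is smaller than the 2-3x you see on more deeply nested structures, but the direction is the same):

```python
import json
from xml.etree import ElementTree as ET

# The same tabular data, serialized both ways
rows = [{"id": i, "name": "user%d" % i, "email": "user%d@example.com" % i}
        for i in range(100)]

json_payload = json.dumps(rows)

root = ET.Element("rows")
for row in rows:
    el = ET.SubElement(root, "row")
    for key, value in row.items():
        ET.SubElement(el, key).text = str(value)
xml_payload = ET.tostring(root, encoding="unicode")

print(len(json_payload), len(xml_payload))  # XML comes out noticeably larger
```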

One of the comments also mentions that there’s a PHP extension for JSON that’s better than what Matt used, and Matt’s response: “I used the Zend Framework instead of the json php extension because I really think that the Zend Framework is the easiest to setup and use, and I have other examples of using the ZF that I’m going to publish”. So instead of looking for the best tool for the job, he went with the easiest tool. But for XML testing he went with the hardest but most efficient “tool”: manual creation of XML with no conversion from objects, no use of XML creation tools, nothing.

I dislike ignorance when one tries to present facts, but this article actually makes me suspicious that his intention was to “prove” XML was the best technology of the three, and that he was willing to manipulate data in any way necessary to provide evidence. It’s pretty despicable to have a position of influence (adobe employee) and abuse it to prove a totally BS point.

It should be noted that some of the comments, particularly Blaine McDonnell’s, ripped Matt apart better than I can. But when ignorance and/or deception rear their ugly head, it never hurts to point them out one last time.

Boycott Ebay!

I use Magic the Gathering Online as a source of supplemental income. I am not “poor”, but with a wife, two kids, and a lot of medical bills for one child, my income is stretched extremely thin. Every month we might be able to save $20-40 for future emergencies. So when I found that I could make $20-40 extra in any given month via MtGO, I jumped at the chance – that’s doubling our current disposable income!

As many people are aware, ebay has decided to stop allowing sales of “digital items” in which the seller does not own the rights to the items in question, or is not authorized to trade those items. As an avid player of Magic the Gathering Online, I made sure Wizards of the Coast does indeed allow such transactions. WotC’s website even has a forum for conducting trades with other players, for cash or other in-game items. Their “code of conduct” states that the player is allowed to trade digital goods. I did my homework.

I was warned to no longer list digital items from this game, and my auctions were deleted. I sent several emails back and forth explaining that WotC allows MtGO properties to be traded (they even offer a forum on their site for this specific purpose). The responses were always the same – don’t list digital goods you aren’t authorized to list. Eventually they flat out told me they were no longer listening to my emails, at which point I told them that I would continue listing auctions until a written policy stated I could not do this.

Interestingly, it was only after this final email from me, in which I was excessively blunt and probably a bit rude, that they banned me. The ban appears to have happened within an hour or two of my final email, which seems pretty suspicious to me. They say they’ll lift the ban within seven days (assuming I fill out a form and jump through a bunch of hoops), but I don’t see the point – they’ll just continue to bully me in order to prove a point or something.

I’m keeping a record of the email that’s been happening, and I hope that even though my blog is very small that I can get some support. Check out the emails if you’re interested: “http://blog.nerdbucket.com/pages/ebaybs”:http://blog.nerdbucket.com/pages/ebaybs

And check these guys out: “http://www.ebaypigs.com/default.asp”:http://www.ebaypigs.com/default.asp

PCI Evils

“PCI compliance”:http://en.wikipedia.org/wiki/PCI_DSS is a good idea. In theory. At my job we’re adopting all these standards to make all our users’ experiences better, which is really a great thing. But just like every other “good idea in theory”, this one is being abused in horribly stupid ways.

As a professional web programmer who actually cares about keeping my job, I do spend the time to learn little tidbits about security from time to time. And on our team, we have a pretty effective security specialist who makes sure things like “XSS”:http://en.wikipedia.org/wiki/Cross-site_scripting and “SQL Injection”:http://en.wikipedia.org/wiki/SQL_injection aren’t going to bring down our rather important e-commerce sites. I’m not even half as knowledgeable as this guy, but I still consider myself a proficient web security person. So to me, being treated like I don’t even know the definition of “security” can be a bit frustrating.

Recently we had a required meeting for PCI compliance. It was not something anybody could get out of, not even our security specialist who regularly attends security conferences. Okay, right, people need to know about security issues. Fine, we’ll all go to Security 101 and be able to have a quick laugh, right?

Yeah, more like a long and somehow excruciatingly-painful laugh. We learned the following things, no joke:

* Do not store passwords on a piece of paper under your desk.
* Do not hold the locked doors open for people who clearly don’t belong in the building.
* If you lose a laptop that contains customer data, such as credit card numbers, report it to a manager. (I don’t know what the other options for this situation could even BE… pretend you still have said laptop by building one out of cardboard boxes?)
* If you see a total stranger sitting at somebody’s desk, and you know they’re not that somebody, you should report it.

There were other points to learn. Something about “report any anomaly that isn’t normally there” (isn’t that kind of the definition of anomaly?), though that’s not stupid advice as much as a funny way to word the stupid advice.

Then there’s the “don’t ask don’t tell” security policies. One of the speakers talked about how he had to fire a guy who was using some cracking software to test the strength of user passwords. Because, you know, he was using EVIL HACKER SOFTWARE, by golly! The speaker actually said, “He was using software to test the strength of passwords, and while he claimed it was a security test, that’s something hackers do.” Don’t get me wrong, maybe the dude was a malicious hacker (yes, not all “hackers” are malicious), but I’d have liked to hear why our illustrious consultant friend was so sure of this guy’s evil-doing ways….

I had a recent experience that was similar, so the subject is a bit of a sore spot. There was no firing, but I was “talked to” by an exec for having tested and then written up a report to my managers when I discovered some security problems. I guess I made the mistake of actually verifying that my hunch was correct – verification required me to H4X0R other people’s accounts (with their consent, mind you).

Back on topic… so not only was this class totally below everybody in my department, but the only lesson I learned was that you never, ever point out security flaws that look too technical in nature, otherwise you’re a suspected hacker. Awesome message, PCI consultants! I salute you!

I guess what I’m saying here is that PCI compliance is a great thing when it comes to the big picture – store credit card data safely, don’t store the CVC data at all (the little 3- or 4-digit number on the back of your card), never send unencrypted customer data anywhere, etc. But once you bring consultants into the mix, every good idea turns to shit.
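Here’s a toy sketch of what that big picture looks like in practice (Ruby, with helper names I just made up – this is my own illustration, not anything lifted from the PCI spec): keep only the last four digits of the card number, and never keep the CVC at all.

```ruby
# Mask a card number down to its last four digits for display/logging,
# and build a storable record that simply omits the CVC.

def mask_pan(pan)
  digits = pan.delete("^0-9")                 # strip spaces and dashes
  ("*" * (digits.length - 4)) + digits[-4, 4] # keep only the last four
end

def storable_record(card)
  # Deliberately no :cvc key in what we hang onto
  { pan_masked: mask_pan(card[:pan]), expiry: card[:expiry] }
end

card = { pan: "4111 1111 1111 1111", expiry: "12/09", cvc: "123" }
storable_record(card)
# => {:pan_masked=>"************1111", :expiry=>"12/09"}
```

Not exactly rocket science, which is sort of my point: the rules themselves are easy; it’s the consultant theater around them that hurts.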

I’m starting to think that lawyers spawned technology consultants….

Netflix kinda sucks

This is an actual email conversation. Even if Netflix offers to blow me, I’m switching. Just thought the world should know that however sucky Blockbuster may be, avoiding them is not worth taking it in the bum time and again.


I authored the original email, quoted below.  I'm amazed at the lack of
care for your customers.  Not only did you reply to the wrong person
(reading the ENTIRE email would have told you who the message actually
came from), but you didn't even address our concerns.  Your service
suddenly took a dive and all you can do is give us a canned response
about the USPS?  Undeliverable mail?  We've been getting mail at our
address for a very long time without incident, so don't go blaming your
company's problems on the USPS without at least some kind of
explanation.

1 - Two movies in a row were “lost”, and one miraculously reappeared.
2 - One movie (Underworld) was shipped incorrectly. This issue just sort of disappeared and wasn’t even acknowledged in your message.

And even if the screwup for THREE MOVIES (two of which were nearly back-to-back) is somehow truly USPS’s fault, it still might be a good idea from a customer service perspective to try to appease the customer who has just mentioned that they’re ready to switch service!

Anyway, thanks for cementing our decision for us. We'll be cancelling as soon as we get our queues set up and whatnot. I look forward to seeing how your unique views on customer retention work out.

Jeremy Echols

On 10/6/06, Karen Echols Photography karen@karenechols.com wrote:

Hi Karen,

Thanks for your inquiry.

We appreciate you letting us know that you never received the movie, yet it was checked back into our warehouse. It is the policy of the USPS to return mail that is undeliverable. The most common reasons are: the mail was damaged in processing and the label was illegible, the mailer cover became separated from the rest of the envelope, or an error with the USPS known as "Looping" occurred where the movie was returned to us in error instead of being delivered.

If you still wish to view this title, please feel free to add it to your Rental Queue.

If you have any further questions or concerns, please feel free to contact us.

Thanks, Jennifer Netflix Customer Service

-----Original Message----- From: karen@karenechols.com Sent: Friday, October 06, 2006 4:53:07 PM To: customerservice@netflix.com Subject: Shipping and Receiving Movies: Other

Subject: contact customer service To whom it may concern:

We've been with Netflix now for over a year, and have always been willing to put up with the throttling because we just didn't see the switch to Blockbuster as a viable option. Recently, however, we've had one movie shipped incorrectly (with no attempt so far to correct the issue) and two movies go missing.

I had Underworld, 1986 (crime / comedy), on my queue and got Underworld, 2003 (vampire / werewolf). I reported it as being in the wrong sleeve, and yet it never got sent back to me.

After I reported "Run Lola Run" missing, it mysteriously got returned (I never had it, so how could it have been returned?), according to your status report, and it cost me about a week and a half of waiting for you to get the movie to me for what you claimed was the second time.

Now we've got "The Sentinel" suddenly missing yet again. Wonder how long before you claim we've returned that one, too?

If you can't find some way to explain your service problems and actually, God forbid, communicate with your customers, don't be surprised when we switch to the inferior service at Blockbuster. I'd rather get less, but get what's advertised than keep getting these random problems that I can't even adequately talk to a real person about. Jeremy Echols [Address & Phone Number]

Email/IM bad for communication?

I keep reading news articles about the inferior nature of email, instant messaging, web forums, and other forms of non-verbal communication. I have finally found the source of one of these articles, and have found out some very interesting details.

The news article I’m referencing is “Your Emails Aren’t As Funny As You Think”:http://www.digitaljournal.com/news/?articleID=4787, which is based on the research in “Egocentrism over E-Mail”:http://www.gsb.uchicago.edu/fac/nicholas.epley/Krugeretal05.pdf. The research study is mentioned in various places throughout the internet, but I need to stay focused if I want to tear both the research and the article apart.

The general belief I’ve gotten from the research is that when you’re communicating verbally, you can read body language and hear tone and voice inflection to get a better idea of the true meaning of the message – sarcasm, annoyance, flirtiness, humor, etc. In email, for instance, there is no body language and no tone to be read.

h2. The flaws in the research study

I have found that the study has made some decent points, but seems to very specifically avoid certain aspects of email communication that would likely have helped show email in a better light.

First off, I’m not talking about study 5 here. This is a study where they took preselected Jack Handey quotes from Saturday Night Live to see how often people’s humor meshed with somebody else’s, when the message was shown in video vs. email. I’m just not sure what the point of the study is when the humor is not from the actual person. I don’t think anybody will question that humor is better spoken by a practiced comedian than sent in email by an anonymous stranger, so if that’s all they meant to prove, I think they wasted their time.

h3. Most of the studies relied on pre-written text

Studies 1, 2, and 4 had people deliver a certain sentence word for word. These were to be delivered in a certain tone: sarcastic, serious, angry, joking, maybe a couple of others. These sentences were not divulged (I think that without these sentences, it’s very difficult to determine the validity of the tests), so I can’t comment on how useful they may have been… but think about this: if the sentence is “Your mother is such a bitch for making you pay for your own car”, it’s totally ambiguous whether the speaker/emailer is being sarcastic if all you do is read the text. If I were trying to say that sarcastically, I would probably start off with “Oh yeah, your mother is just such a bitch…” and end with “God forbid….”. Maybe even laugh.

With vocal inflections, I can certainly make my meaning clearer no matter what the exact text is. But in email, we rely on things like context to make a message’s intent more clear. Even in normal vocal conversation, we change our message to indicate a different tone, though it’s not nearly as necessary.

The problem I have with these three studies is that the email group of the studies wasn’t allowed to modify the message in any way. No bolding, italicizing, capitalization, smilies, or laughter (LOL, ROFL, hehe, etc) could be added. This, in my opinion, makes for very fallible results. Look at the next section for more details….

h3. Email and IM communications most certainly can have a tone.

As I just said: there are things like bolding text, italicizing text, CAPITALIZING text, and using smilies (:D, :), =), ;), :-P, etc.) to get the point across about your intent. Read the following two sentences:

I think the direction our company is headed is absolutely correct, and I’m glad to be a part of it. I won’t be looking for a new job anytime soon.

I think the direction our company is headed is absolutely correct, and I’m GLAD to be a part of it. I won’t be looking for a new job anytime soon ;).

It’s not crystal clear, but the second sentence has a different tone than the first, and will give people a better chance of “getting” the real message (“I hate the direction we’re going, and I’ve already posted my resume to monster.com”).

Now consider that some email programs (and most forums and IM clients) even have graphic smilies for showing even more specific emotions. Add an eye-rolling smilie to that prior message, and I doubt many people will mistake the tone.

The interesting thing to note here is that the study admits that smilies (referred to as “emoticons”) might help send the right tone, but they claim that won’t make much difference.

The way they “prove” this conclusion:

* Some smilies are ambiguous, such as “;-)”. Is that a happy response? Flirty? A “just kidding” response?
** This is true, but the same is true of real life! If somebody says “I like that shirt” and winks, I won’t know if they’re being friendly-but-weird, flirty, or just kidding.
* A “follow-up” study was done that allowed emoticons, and found that overconfidence wasn’t affected between the emoticon-users and non-emoticon-users.
** Um… what emoticons were used? What tones were available? What size was the group? In other words, without showing specifics about that follow-up study, how can you use it to dispute emoticons?
** Along the same lines, what was found in that study? Overconfidence may not have changed between the groups, but did accuracy change? If accuracy went up, the level of overconfidence may well not have changed, but that would very nicely prove my point about email tone!

h3. Emailing strangers will lead to more misinterpreted messages than emailing friends or even coworkers.

Study 3 “proves” my above statement incorrect. But you see, here’s where I get into context again. Familiarity is all fine and dandy, but if users can’t bold, italicize, use smilies, or otherwise convey context, then you’re not testing their ability to communicate!

In email, if I’m sarcastic, I’ll add a smilie or “p’shaw, whatev” or something. In fact, to different people my sarcasm will be different. To a good friend, I can say “Oh dude that is totally so like awesome man! I’m so stoked about it, sign me up, brotha!” My friend will know I’m being sarcastic because I don’t normally IM/Email like that. My father, on the other hand, whom I speak to more formally, won’t know how to interpret that message.

Surrounding context is even more important in my opinion. Read on…

h3. The overall conversation’s context isn’t even evaluated!

Context is incredibly important. If I’m asked to convey anger in a single sentence, I don’t know how I’d do it in a reliable way, other than “I’m really angry” (and note that this too can be interpreted many ways depending on context). Measuring the results of effective communication based on a single sentence is simply measuring the wrong thing. They’re seeing how well people can convey an emotion in a single, context-free instance. They then use those results to claim that email is inferior to verbal communication, even though in normal communication, a huge amount of interpretation is based on the context of the conversation.

If my wife and I have been joking around and she suddenly says, “You’re such a jerk!” I’ll know she’s joking, even if her tone would suggest otherwise. The same sentence, spoken similarly, could mean anger, hurt, frustration, or nothing at all. All depending on the surrounding conversation.

h3. The study is flawed because the participants knew what was being studied!

This may be a controversial statement, but I believe it’s true. Let me explain. If I yell out “You ASSHOLE! I’ll kill you!” in the meanest voice I can muster, and your options for my tone are: angry, sad, sarcastic, or joking, you’ll probably pick angry. If you know me well enough, though, you’ll know that in real conversation, if I yell that out, I’m joking. So if you and I know we’re being tested for tone, I’ll speak the way I expect the average person will understand me, and not the way I would speak in a normal conversation. In fact, I’ll likely exaggerate my speech (sarcasm: “OOOOHHHHH I’M SOOOOOOOOOOOO EXCITED”) to “get the test right”.

In a normal conversation, my angry tone is barely different than my serious tone. I don’t yell; I rarely even swear (out of anger, at least). If you had to interpret a real tone from a real conversation, you would not have nearly as easy a time, and friends and family would have a huge advantage over strangers.

My point is that the experiment should have measured tone in a different way. The speakers/emailers could have been told to create a message as if it were to various family members or friends, for a specific scenario. After typing it up and speaking/emailing, they would have been asked to rate the overall tone of each message. It could have been presented as some study in effective communication. The recipient of the message would be asked various questions, some related to the study, most not. How well did they get their point across? Was it too wordy? Too brief? Etcetera.

h3. The Implications section of the study is flawed

So we’ve got, in my opinion, some flawed results. The idea that we communicate in a way that is egocentric makes plenty of sense, but most of the other conclusions I’ve seen are, at the very least, misguided. But the final section blows me away.

The claim is that other forms of nonverbal communication are going to be as bad as, or worse than, email. They explicitly include instant messaging.

I’m convinced this study was done by people who view email as a necessary evil, and not by people who “get” it. Their conclusions would suggest this much, but this section really convinces me. Instant message programs like AIM, Yahoo, and MSN all have some very animated smilies for conveying tone. As shown above, the eye-rolling animated icon can do wonders for a message. Now imagine dozens of these, all available in one or two clicks. For those of us who like to type, these animated emoticons are even easier to put in a message.

Look at the variety of emoticons in most IM programs and tell me you can’t effectively convey sad vs. angry vs. sarcastic vs. serious. Hell, I could run a study using Yahoo Messenger where people only get to use one icon to convey those four emotions, and guarantee better results than this study….

h2. Gripes about the news

The news article that references the study is flawed as well. The one I’ve referenced above draws conclusions that aren’t in the study – they go from the SNL Jack Handey jokes losing funniness in email to the conclusion that emails you don’t find funny are inherently flawed.

This sentence is just the beginning: “According to a recent study by a trio of business scholars, people think their emails are twice as funny as they really are.” The study is talking about going from a comedian reading of a very specific joke to a FLAT EMAIL. Jokes forwarded around the internet may not be funny to a lot of people, but they tend to circulate well because they’re the kind of humor that doesn’t need to be heard! Jack Handey quotes most definitely gain a lot from their reader.

What’s more, people rated a flat reading of certain jokes at a higher level than the recipients (for instance, I rate a certain quote at 7, but the person I send it to rates that one lower because they like a different one better), showing us that flat reading -> flat reading loses something! Study 5 (the one about humor and Jack Handey) taught us that people have vastly different tastes in humor. Any given person choosing the 5 funniest Jack Handey quotes, even in text-only form, will find that, on average, other people don’t find those 5 to be the funniest! WTF does that have to do with this news article’s conclusion?

Then this article specifically mentions “Photoshopped celeb pics” and “a hilarious clip of a napping cat” as problematic emails addressed by this study. But pictures don’t fall into the boundary of email communication problems! Again, WTF?!? When (and why) did the author think to jump from email communication problems to pictures that he doesn’t find funny? How the hell do those even relate?

Then he goes so far as to say the study is “a much-needed slap in the face to the forward-frenzied emailers out there”. While the study drew a lot of incorrect conclusions and tested the wrong data, it was at least paying attention to something, and did have a lot of research behind it. The author of this article, David Silverberg, apparently didn’t even READ the damn study! He probably heard about it, wanted to make himself look clever, and chose to revel in his complete ignorance rather than actually research the facts.

As much as I disliked the lack of proper scientific method in the study, this article (and others like it) makes me sick! How can we trust any journalists anymore, when so many of them just recycle other people’s data? And fuck if they can’t even do that right!

h2. Conclusions

Email is almost certainly inferior to verbal communication. But c’mon people, let’s measure the right data next time! And Mr. Silverberg, please try doing the tiniest iota of research before you write again. Might save you from coming across as an ignorant, lazy twit. Oh wait, too late for that….

Perl vs. Ruby

Just a quick random thought about getting rid of whitespace from a string.

In perl, you trim.

In ruby, you strip.

Now which language is really sexier? You be the judge.
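For the curious, here’s the difference in actual code (a quick sketch, and to be fair to Ruby’s sexiness: core Perl doesn’t even ship a trim, you get it from a module like String::Util or roll your own regex):

```ruby
# Ruby: whitespace removal is built right into String
"  nerdbucket  ".strip   # => "nerdbucket"
"  nerdbucket  ".lstrip  # => "nerdbucket  "
"  nerdbucket  ".rstrip  # => "  nerdbucket"

# Perl: no core trim; the usual idiom is a substitution
#   $str =~ s/^\s+|\s+$//g;
```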