And this from an “industry leader” who should know better.
There are some wonderful quotes from the father of Rails that tell me he’s still the same douchebag he’s always been, and that he still has no idea how quickly complex projects go bad when built with his OMFG COUPLE EVERYTHING YAY mentality.
- Not all projects are simple and fit the MySQL world
- TDD didn’t invent good architecture
- Decoupling has other benefits
- DHH has probably never worked on a “real-world” project in his life
- Consulting gigs don’t count; consultants rarely maintain code long-term
> Test-first units leads to an overly complex web of intermediary objects and indirection in order to avoid doing anything that’s “slow”. Like hitting the database. Or file IO. Or going through the browser to test the whole system.
Intermediary objects on a complex project can be a good thing. On Cyclops (that’s the fake name I’ve given a horrendous clusterfuck of Rails insanity I’m currently working on), we’re so tightly coupled to so many specific technologies that a dev server has to run a complex niche database, a solr server, a jetty container for these two servers (yes, it must be jetty), Redis, sqlite, and mongo. The number of technologies could certainly be reduced with a little planning and thought, but the fact is each piece serves a very specific purpose, and is necessary for the project in its current state. If we did things the “overly complex” way, adding service classes and whatnot, we could actually swap out components that don’t work without breaking the APIs. That’s the point of having these “intermediary objects” in many cases – creating a Facade or Bridge or other adapter to allow for easier code reuse, decoupling, etc.
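To make the “intermediary objects” point concrete, here’s a minimal sketch of the kind of facade I mean. All of the names (`SearchIndex`, `SolrBackend`, `ElasticsearchBackend`) are invented for illustration; the real adapters would obviously talk to real services.

```ruby
# Hypothetical sketch: a thin facade in front of a search backend.
# Callers only ever see SearchIndex#find, so the backend can be
# swapped without touching the rest of the app.
class SearchIndex
  def initialize(backend)
    @backend = backend
  end

  def find(query)
    @backend.search(query)
  end
end

# One small adapter per technology; each just has to answer #search.
# (Stubbed return values here -- a real adapter would hit the service.)
class SolrBackend
  def search(query)
    ["solr result for #{query}"]
  end
end

class ElasticsearchBackend
  def search(query)
    ["elasticsearch result for #{query}"]
  end
end

index = SearchIndex.new(SolrBackend.new)
index.find("widgets") # => ["solr result for widgets"]
```

Swapping Solr for Elasticsearch becomes a one-line change at the construction site instead of a hunt through every caller — which is exactly the situation Cyclops is not in.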
On a giant pile of shit like Basecamp, these architectural principles probably don’t matter. When your app does basically one thing and doesn’t worry about things like being useful or having features, it’s tough to understand the importance of proper design and architecture.
On Cyclops, where a lot of technology has proven not to work well, and where we’ve had to scrap, replace, and/or rewrite huge components, proper architecture might actually have made a huge difference. But the community is big on tight coupling (way to go, anti-architecture zealots), so we have ended up with shit DHH would cream himself over, where swapping pieces out requires significant effort, and changes to one system have an absolute guarantee of fucking up something totally unrelated.
What’s funny is that DHH is stupid enough to blame TDD, a relatively recent practice, for architectural principles that have been around for decades. The poor kid has got to come back down to earth someday.
> [The claim that it’s too slow to connect to external services is] not true in 2014 where you can run your MySQL or PostgreSQL database locally off a super-fast SSD.
This is naivety at its worst… it would be funny if it weren’t being said by the inventor of a huge and relatively popular framework.
I’ve worked for small companies, large companies, and two (radically) different groups within the university system. I’ve worked on a lot of projects. And this “one database to rule them all” philosophy is just plain wrong. It only works when you’re a consultant and can afford to build throw-away solutions. He’s so out of touch with real-world programming. It reminds me of Lucille Bluth asking, “How much could a banana cost? Ten dollars?”
Enterprise shops may very well run Oracle, MSSQL, or god-knows-what simply because “that’s the way it is”. They may need something like Solr or Elasticsearch, neither of which are awesome to run locally. They may need niche services that are slow and potentially proprietary, where hitting the “database” is definitely not a good idea during development, and certainly not during testing. Having a nice abstraction API that lets the dev “plug in” their database of choice can be a very good thing.
Even if you’re using MySQL today, it may be worth building service objects for an app that will live for a long time. It eases switching to completely new technologies that could pop up in a few years. You may find instantiating objects from Solr to be faster than hitting the database for some reason (Cyclops actually has to do this in some cases). You may find relational databases aren’t fitting your needs, but you don’t want to be hacking up half the app just to switch to something else. You may find that in development, MySQL is fine, but in production it’s too slow and you have to switch to CouchDB or something. Having your logic layer separate from your database layer means an easier time making these kinds of changes.
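Here’s a minimal sketch of what “logic layer separate from database layer” looks like in practice. Everything here (`InvoiceService`, `InMemoryInvoiceRepo`, the `#invoices_for` interface) is a made-up example, not code from any real project.

```ruby
# Hypothetical sketch: business logic behind a repository interface.
# The storage engine (MySQL today, Solr or CouchDB tomorrow) is a
# constructor argument, not a hard dependency baked into the logic.
class InvoiceService
  def initialize(repo)
    @repo = repo
  end

  # Pure business logic; it has no idea what the backing store is.
  def total_owed(customer_id)
    @repo.invoices_for(customer_id).sum { |inv| inv[:amount] }
  end
end

# One adapter per backend; each only has to answer #invoices_for.
# An in-memory version doubles as a fast test double.
class InMemoryInvoiceRepo
  def initialize(rows)
    @rows = rows
  end

  def invoices_for(customer_id)
    @rows.select { |r| r[:customer_id] == customer_id }
  end
end

repo = InMemoryInvoiceRepo.new([
  { customer_id: 1, amount: 100 },
  { customer_id: 1, amount: 50 },
  { customer_id: 2, amount: 75 }
])
InvoiceService.new(repo).total_owed(1) # => 150
```

Switching stores means writing one new adapter that honors `#invoices_for`, not hacking up half the app.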
I’ve worked on at least three projects that couldn’t have used a fast/local database no matter how hard I may have wanted them to. I’ve worked on two that needed to use a database that didn’t fit in the Rails world at all (Cyclops being the latest abomination). Because of DHH’s ignorant, myopic view, and the idiots who eat up his bullshit like it’s gourmet cuisine, I am indeed running a local version of the niche database, AND IT’S SLOW AS SHIT AND FUCKING AWFUL. When a single unit test can easily take 5-6 seconds due to having a shitty database, yes, building code to make faster tests is fucking worth it.
Oh, and it also might make it easier to switch to a less shitty database if we ever find something better. But DHH doesn’t care much about the future, I suppose. If he did, Basecamp might not make me cry every time I’m forced to use it.
> I can run every single model test in the Basecamp suite in 80 seconds. That’s 3333 assertions, all hitting the database, all going through Active Record, and all using the killer Rails feature of test fixtures.
I just had to include this because it’s so cute. Using Basecamp as a metric for any kind of success. I guess he assumes most people reading his blog won’t have firsthand experience with that app.
> I rarely unit test in the traditional sense of the word, where all dependencies are mocked out, and thousands of tests can close in seconds. It just hasn’t been a useful way of dealing with the testing of Rails applications.
Love it! I think the key here is “Rails applications”. Rails discourages good design principles, so of course building good tests is tricky.
What’s interesting is how we see mocking mentioned here first and foremost, but with all the architecture he seems to hate, you can unit test things like service objects without any mocking at all in many cases. A huge part of separating concerns is eliminating unnecessary dependencies – which as a side-effect minimizes things like stubbing and mocking.
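A quick sketch of what “unit testing without any mocking” looks like — note there’s no stub/mock framework anywhere, just a tiny hand-rolled fake. The names (`PriceCalculator`, `FixedRates`, `#rate_for`) are invented for illustration.

```ruby
# Hypothetical sketch: a service object whose one dependency is passed
# in, so the unit test needs no mocking library at all.
class PriceCalculator
  def initialize(tax_lookup)
    @tax_lookup = tax_lookup
  end

  # Pure logic: no database, no Rails, nothing to stub out.
  def total(subtotal, region)
    subtotal * (1 + @tax_lookup.rate_for(region))
  end
end

# The "fake" is four lines. No mocking framework required.
class FixedRates
  def rate_for(_region)
    0.25
  end
end

calc = PriceCalculator.new(FixedRates.new)
calc.total(100, :anywhere) # => 125.0
```

Because the dependency is an argument rather than a hard-coded constant or a global, the test exercises real logic against a trivial stand-in — which is the side-effect of separated concerns I’m describing.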
> I think it’s equally folly to unit test the view, which is a motivation that’s driving a lot of interest in Presenters (though they too have occasional legitimate uses).
Interestingly, I hate view testing, but I have found presenters to be highly useful for um… separating concerns. 1,000-line models aren’t awesome, especially when you have `##### TEXT OUTPUT CODE #####` banners and similar logical “breaks” all over the code.
But aside from architectural stupidity, DHH seems to equate TDD with NOT doing integration testing. I have no clue where he gets this from. Gary Bernhardt, creator of Destroy All Software, showcases generally great design, blazing-fast unit tests, and integration testing. And he’s HUGE on doing things with TDD. I have personally worked with people who are all about unit testing and TDD, and not one of them said we should ignore integration testing.
It’s like he wanted to make TDD look really bad, but couldn’t find any truly valid points, so he just started making shit up.
You know who else did TDD? HITLER!
DHH didn’t actually say that, but you can tell he really wanted to.
And with that, I leave you with an article that is far better than mine, and truly rips DHH a new one.
I present: Gary Bernhardt and his rebuttal, TDD, Straw Men, and Rhetoric
(I’m not in love with Gary, he’s just a very smart person)
((Okay, maybe I’m a little in love with him))