Top 5 logical fallacies used in technical debates

I’m an engineer, so I get into a lot of technical debates… and plenty of debates on other topics besides. I can certainly come across as argumentative, but all I’m really seeking is a reasonable debate between valid arguments.

What I often get, instead, are arguments based on nothing at all. I think it’s important for people in our field to understand the more common logical fallacies so they can recognize them when they’re used… and avoid using them themselves.

#5: Slippery Slope

This is a form of ‘probability fallacy’… the claim that because one step is taken, an escalating chain of further steps becomes inevitable… or that because X has happened 10 times in the past, it is bound to happen in the future. Statistically, that’s false. If I flip a coin 10 times and get heads every time, it’s still 50/50 that I get heads on the eleventh flip.

It’s hard to convince people that this isn’t a good argument, but it isn’t. In some cases, it’s an argument about the failings of human nature. But I still think it’s just lazy… tell me why it’s a bad idea rather than convince me that, at some point in the future, it will cause bad things to happen. I am sometimes guilty of this one myself.

Example: But if we add that property to the class to solve that problem, what stops us from adding other properties? Eventually, the class will have 1000 properties!

#4: Appeal to the Masses

Also known as an ad populum argument. Just because 10 ba-jillion people do it doesn’t mean it’s right. We used to bleed people to cure diseases… that didn’t make it right. Tell me what’s good about your solution. What are its pros and cons? I don’t care… at all… how many people think it’s a good idea.

Example:  But everyone is using Java applets on their web pages, so we should be doing that also!

#3: Appeal to Authority

It’s very, very common in this internet age to fall victim to this one, because authority is granted to the most random of internet characters. Having a blog does not make you an authority, and pointing to someone else certainly isn’t a valid argument to validate your solution. If you want to plagiarize that person’s arguments to make your case, that’s fine… but please don’t tell me you’re right because some guy said so.

Example:  But Scott Hanselman says that everybody should be using NuGet for everything!

#2: Straw Man

This one is used a lot, and it’s good to be able to spot it when it happens. Someone takes your side of the argument and misrepresents it in the grossest way possible in order to beat it about the head and shoulders like a birthday piñata.

Example: So you want to take all that data, jam thousands of rows into the database and then just do it all with stored procedures?!? Um, no… not what I said at all.

#1: Ad Hominem

This happens all the time. It’s basically attacking the debater rather than the idea… usually a desperation move, or the habit of someone unaccustomed to defending their positions. It often takes the form of Appeal to Motive… attacking the debater by questioning their motives.

Example:  You don’t like this solution because you just don’t understand it (because, presumably, I am an idiot). You just want to do it that way because it’s less work for you and more work for me (appeal to motive).

I think it’s good for people to know debating skills and what makes for a good argument, and what doesn’t. A good read of Robert’s Rules of Order would be nice also!

Innovation not Regurgitation

Given the rapid growth of the internet as an information resource… and our seemingly inherent human treatment of the written word as gospel… it is very easy for my fellow engineers to read a single blog entry and conclude that ‘we should be doing that!’

… and I think that’s a very serious problem.

To make matters worse, the ever-increasing number of tools and frameworks leads to a dangerous tendency to reduce our engineering responsibility to ‘toolbox assembly’.  We’ll use Tool A for this and Tool B for that, and we’ll house it all in Framework C… my work is done here!

I remember sitting in a meeting a few years ago and asking “when did we get so afraid to write our own code?”

Ultimately, I’m interested in ideas… using tools and contemplating thoughts from internet blogs are useful endeavors… but they shouldn’t replace our basic creative function as engineers.

Give me innovation not regurgitation!

Drop all temp tables in current scope

Have a procedure with tons of temp tables you don’t want to keep track of?  Or debugging some nasty code with tons of temp tables?

This SQL snippet is quite handy.  It will drop all the temp tables in your current scope.

declare @sql varchar(5000)

select @sql = isnull(@sql + ';', '') + 'drop table '
            + substring(, 1, charindex('___', - 1)
from tempdb..sysobjects as t
where like '#%[_][_][_]%'
  and = object_id('tempdb..' + substring(, 1, charindex('___', - 1))

-- print @sql
exec (@sql)
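As a quick illustration (a hypothetical session… the table names here are just examples), create a couple of scratch tables, run the snippet above, and the scope comes up clean:

```sql
create table #a (x int)
create table #b (y int)

-- ... execute the drop-all snippet here ...

select object_id('tempdb..#a')  -- NULL once #a has been dropped
select object_id('tempdb..#b')  -- NULL once #b has been dropped
```

The snippet finds them because temp table names in tempdb..sysobjects are padded with underscores and a scope suffix, which is also why the substring/charindex trimming is needed before the drop.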


SQL Cursors are slow… except when they’re not

SQL cursors are bad, evil, wicked things… a scourge on performance and a pox on the skills of any good SQL author.  Any good SQL man worth his salt will scoff indignantly at the mere sight of the CURSOR keyword in any production code.  It’s clear to those experts that a set-based operation will always outperform a crude cursor-based approach.

… except when it doesn’t.

Let’s say I’ve got a table constructed as follows:

create table #test (id int identity(1, 1), value money)

declare @i int = 0
while @i < 10000
begin
 insert into #test values(100)
 set @i = @i + 1
end

… and then, summoning all my years of SQL skill, I compute a running total as follows:

select, sum(t2.value) as Total
from #test t1
inner join #test t2
 on <=
group by
order by

The magic is that little <= join clause creating a sliding ‘window’ and yielding results like this:

id Total
1 100.00
2 200.00
3 300.00
4 400.00
5 500.00

Awesome, that totally works and I am a genius… but being the diligent SQL guru that I am, I take a little gander at the ol’ Query Plan.

Ye Gods!

Yeah, that plan looks amazin… wait, what?!?

And I just crapped my pants… that innocent query against 10,000 rows just blew up with… wait for it… over 50 MILLION rows!

That’s your little friend known as an Eager Spool. For each row that needs a running total, SQL Server has to pull that row and every row before it… over and over. So to get the first value you read one row… for the second value, you read two (for a total of three)… for the third, you read three (for a total of six), and so on.

SQL Server helps you out here by just exploding the combinatorial results. This is really the sum of sequential integers (1+2+3=6)… a summation calculable by the formula ((N^2)+N)/2. Plug in 10,000 and you get… lo and behold… 50,005,000 rows. Exactly what our eager little buddy is showing.
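A quick back-of-the-envelope check of that formula (throwaway arithmetic, not part of the solution):

```sql
select (power(10000.0, 2) + 10000) / 2  -- 50005000: matches the Eager Spool's row count
```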

You can imagine that, as the number of records goes up, your query performance will go dramatically down… and your imagination would be right. This little baby goes pear-shaped real quick.

The only solution is to calculate a result, store it, and then add the next record, store that again, and so on… in other words, don’t look back at every record before you, but rather store an accumulating result.

How can you do that? Well, you can wait for SQL Server 2012 and the new SUM() OVER (ORDER BY) capability which will do this exact optimization. This is an extension to the windowing functions added in SQL 2005. Prior to SQL 2012, however, the ORDER BY clause is only supported in non-aggregating functions.
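For the record, here’s a sketch of what that looks like in SQL 2012, against the same #test table. Note the ROWS clause, which asks for the accumulating row-at-a-time frame rather than the default RANGE frame:

```sql
select id,
       sum(value) over (order by id
                        rows between unbounded preceding and current row) as Total
from #test
order by id
```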

In the meantime, you can use your old trusted friend the cursor:

set nocount on
create table #totals(id int, totals money)
declare @total money = 0, @id int, @value money
declare tCursor cursor for
	select id, value from #test order by id
open tCursor
fetch next from tCursor into @id, @value
while @@fetch_status = 0
begin
	set @total = @total + @value
	insert into #totals values(@id, @total)
	fetch next from tCursor into @id, @value
end
close tCursor
deallocate tCursor
select * from #totals

On my development box, the original query took 19 seconds. This CURSOR version is sub-second… BAM! Genius re-established!


Where Clauses Cannot Prevent Divide-by-Zero Errors

So, your query looks something like this:

SELECT s.Quantity / t.TotalQuantity
FROM Sales s
INNER JOIN Totals t
ON t.ProductNumber = s.ProductNumber
WHERE s.SalesDate >= '1/1/2012'
AND t.TotalQuantity <> 0

Totally safe, right? That check on non-zero TotalQuantity values should prevent divide-by-zero errors?

Then you run it in production against 100,000 rows and BLAM!

Msg 8134, Level 16, State 1, Line 1
Divide by zero error encountered.

Ye Gods!  How could this possibly happen?!? Thousands of years of mathematics and physics turned on its head from one simple SELECT statement!

I’ll tell you how it happens… SQL Server has to break this query into a step-by-step plan… and, in its nearly infinite wisdom, it may decide to do things in an order you did not expect.

For example, it may decide to filter on the SalesDate part first, then do all the joins, then the division, and THEN filter on the TotalQuantity clause. If this is the plan it chooses, and you have zeroes for TotalQuantity… BLAMMO! You just got yourself a divide-by-zero before you ever reached the last filter step that would have saved you.

The ONLY way to guarantee this doesn’t happen is to make the division expression itself safe… for example, with a CASE expression in your SELECT as follows. It’s ugly, but safe:

SELECT CASE WHEN t.TotalQuantity = 0 THEN 0
            ELSE s.Quantity / t.TotalQuantity
       END
FROM Sales s
INNER JOIN Totals t
ON t.ProductNumber = s.ProductNumber
WHERE s.SalesDate >= '1/1/2012'
AND t.TotalQuantity <> 0

Any time you do a division anywhere in your SQL code, you should follow this pattern or risk the wrath of the divide-by-zero gods.
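If a NULL result is acceptable in place of the 0, a terser variant of the same defensive pattern is NULLIF… dividing by NULL yields NULL rather than an error. A sketch (the joined table is called Totals here purely for illustration):

```sql
SELECT s.Quantity / NULLIF(t.TotalQuantity, 0)  -- NULL instead of an error when TotalQuantity = 0
FROM Sales s
INNER JOIN Totals t
ON t.ProductNumber = s.ProductNumber
WHERE s.SalesDate >= '1/1/2012'
```

Like the CASE version, this guards the division at the expression level, so no plan reordering can blow it up.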