Justin's Linklog Posts

Bogus Challenge-Response Bounces: I’ve Had Enough

I get quite a lot of spam. For one random day last month (Aug 21st), I got 48 low-scoring spam mails (between 5 and 10 points according to SpamAssassin), and 955 high-scorers (anything over 10). I don’t know how much malware I get, since my virus filter blocks those messages outright instead of delivering them to a folder.

That’s all well and good, because spam and viruses are now relatively easy to filter — and if I recall correctly, they were all correctly filed, no FPs or FNs (well, I’m not sure about the malware, but fingers crossed ;).

The hard part is now ‘bogus bounces’ — the bounces from ‘good’ mail systems, responding to the forged use of my addresses as the sender of malware/spam mails. There were 306 of those that day.

Bogus bounces are hard to filter as spam, because they’re not spam — they’re ‘bad’ traffic originating from ‘good’, but misguided, email systems. They’re not malware, either. They’re a whole new category of abusive mail traffic.

I say ‘misguided’, because a well-designed mail system shouldn’t produce these. By rejecting mail with a 4xx or 5xx response during the SMTP transaction itself — while the TCP/IP connection between the originator and the receiving MX MTA is still open — you avoid most of the danger of ‘spamming’ a forged sender address, because no separate bounce message is ever generated. However, many mail systems were designed before spammers and malware writers started forging on a massive scale, and therefore haven’t fixed this yet.
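To make that concrete, here’s a minimal sketch of the well-behaved approach — reject at RCPT TO time, inside the transaction. (Python, with hypothetical names; this isn’t any real MTA’s API.)

```python
# Reject unknown recipients *during* the SMTP transaction, so that no
# separate bounce message is ever generated. VALID_USERS and handle_rcpt
# are illustrative names, not any real MTA's API.

VALID_USERS = {"justin", "postmaster"}

def handle_rcpt(rcpt_addr: str) -> str:
    """Called at RCPT TO time, while the originating MTA is still connected."""
    local_part = rcpt_addr.split("@", 1)[0].lower()
    if local_part not in VALID_USERS:
        # A 5xx here leaves the *sending* MTA responsible for notifying its
        # own, real user; a forged sender address never hears from us.
        return "550 5.1.1 No such user here"
    return "250 OK"

# The misguided alternative: answer '250 OK' to everything, discover later
# that the recipient doesn't exist, and mail a bounce to the forged sender.
```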

I’ve been filtering these for a while using this SpamAssassin ruleset; it works reasonably well, catching almost all bounces. (There is a downside, though: it catches more than just bogus bounces — it also catches real bounces, in response to mails I actually sent. At this stage, though, that’s functionality I’m willing to lose.)

The big remaining problem is challenge-response messages.

C-R is initially attractive. If you install it, your spam load will dwindle to zero (or virtually zero) immediately — it’ll appear to be working great. What you won’t see, however, is what’s happening behind the scenes:

  • your legitimate correspondents are getting challenges, will become annoyed (or confused), and may be unwilling or unable to get themselves whitelisted;

  • spam that forges other, innocent third-party addresses as the sender will cause C-R challenges to be sent to those innocent, uninvolved parties.

The latter is the killer. In effect, you’re creating spam, as part of your attempts to reduce your own spam load. C-R shifts the cost of spam-filtering from the recipient and their systems, to pretty much everyone else, and generates spam in the process. I’m not alone in this opinion.

That’s all just background — just establishing that we already know that C-R is abusive. But now, it’s time for the next step for me — I’ve had enough.

I initially didn’t mind the bogus-bounce C-R challenges too much, but the levels have increased. Each day, I’m now getting a good 10 or so C-R challenges in response to mails I didn’t send. Worse, these are the ones that get past the SpamAssassin ruleset I’ve written to block them, since they don’t include an easy-to-filter signature signifying that they’re C-R messages, such as Earthlink’s ‘spamblocker-challenge’ SMTP sender address or UOL’s ‘AntiSpam UOL’ From address. There seem to be hundreds of half-assed homegrown C-R filters out there!

So now, when I get challenge-response messages in response to spam which forges one of my addresses as the ‘From’ address, and they aren’t blocked by the ruleset, I’m going to jump through their hoops so that the spam is delivered to the C-R-protected recipient. Consider it a form of protest; creating spam in order to keep yourself spam-free is simply not acceptable, and I’ve had enough.

And if you’re using one of these C-R filters — get a real spam filter. Sure, they cost a bit of CPU time — but they work, without pestering innocent third parties in the process.

Beardy Justin

Yes, I’ve been growing a beard. Strangely, it seems to be going quite well! Here’s a good pic of beardy Justin, standing on a bridge over the Merced river in Yosemite:

Lots more pics from the holiday should be appearing here shortly, if you’re curious.

Mosquitos, Snakes and a Bear

Well, I’m back… it appears that Google Maps link I posted wasn’t too much use in deciphering where I was going; sorry about that. C and I spent a fun week and a bit driving up to Kings Canyon and Yosemite, backpacking around for a few days, then driving back down the 395 via Bishop, Mammoth Lakes, Lone Pine and so on.

Kings Canyon: Unfortunately, not so much fun; we had the bad luck of encountering what must be the tail end of the mosquito season, and spent most of our 2 days there running up and down the Woods Creek trail without a break, entirely surrounded by clouds of mozzies. Possibly this headlong dashing explains how we ran into so much other wildlife — including a (harmless) California Mountain King Snake and, less enjoyably — and despite wearing bear bells on our packs to avoid this — a black bear…

We rounded a corner on the trail, and there it was, munching on elderberries. Once we all spotted each other, there were some audible sounds of surprise from both bear and humans, and the bear ran off in the opposite direction; the humans, however, did not. We were about 500 feet from our camp for the night, so we needed to get past where the bear had been, or face a long walk back.

Despite some fear (hey, this was our first bear encounter!), we stuck around, shouted, waved things, and took the various actions you’re supposed to take. It all went smoothly — the bear had probably long since departed — but we took it slow regardless, and had a very jittery night in our tent afterwards. After that, and the unceasing mozzie onslaught, we were in little hurry to carry on around the planned loop, so we cut our Kings Canyon trip short by a day and just returned down the trail to its base.

Yosemite: a much more successful trip. There were many reasons: primarily, the mosquito population was much, much lower, and we discovered that the Tuolumne Meadows Lodge — comfortable tent cabins, excellent food, and fantastic company — made a truly excellent base camp.

But I’d have to say that the incredible beauty of Tuolumne Meadows and the Vogelsang Pass really blew me away. I don’t think I’ve seen any landscape quite like that, since trekking to Annapurna Base Camp in Nepal. I’m with John Muir — Yosemite and its surrounds are a wonder of the world.

Lee Vining: had to pick up a sarnie at the world-famous Whoa Nellie Deli. Yum! After all the camping, we stayed in a hotel with TV, got some washing done, and watched scenes from a J.G. Ballard novel play out on NBC and CNN. Mind-boggling.

Mammoth Lakes: A quick kvetch. Mammoth is possibly the most pedestrian-hostile town I’ve ever visited. They have a hilarious section of 100 feet of sidewalk, where I encountered a fellow pedestrian using those ski-pole-style hiking sticks, entirely in seriousness. Was the concept of walking so foreign in that town that long-distance walking accessories were required? I don’t know, but it didn’t make up for the other 90% of the streets, where peds were shoved off onto the shoulder in full-on ‘sidewalk users aren’t welcome here’ Orange County style.

On top of that, the single pedestrian crossing in the main street spans five lanes of traffic, with no lighting, warning signs, or indeed any effective way for drivers to know whether peds are crossing or not. Unsurprisingly, we nearly got run over when we tried using the damn thing. Best avoided.

I’m amazed — it’s like they designed the town to be ped-hostile. Surely allowing peds to get around your town is a bonus when you’re a ski resort for half of the year? Meh.

Anyway, back again, a little refreshed. Once more into the fray…

Faster string search alternative to Boyer-Moore: BloomAV

An interesting technique, from the ClamAV development list — using Bloom filters to speed up string searching. This kind of thing works well when you’ve got one input stream and a multitude of simple patterns to match against it. Bloom filters are a hashing-based technique for performing extremely fast, memory-efficient — but false-positive-prone — set-membership lookups.

The mailing list posting (‘Faster string search alternative to Boyer-Moore’) gives some benchmarks from the developers’ testing, along with the core (GPL-licensed) code:

Regular signatures (28,326):

  • Extended Boyer-Moore: 11 MB/s

  • BloomAV 1-byte: 89 MB/s

  • BloomAV 4-bytes: 122 MB/s

Some implementation details:

the (implementation) we chose is a simple bit array of (256 K * 8) bits. The filter is at first initialized to all zeros. Then, for every virus signature we load, we take the first 7 bytes, and hash it with four very fast hash functions. The corresponding four bits in the bloom filter are set to 1s.

Our intuition is that if the filter is small enough to fit in the CPU cache, we should be able to avoid memory accesses that cost around 200 CPU cycles each.
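Translating their description into a quick sketch (Python for clarity — the real code is C, and the md5-derived hashes below are just stand-ins for their four ‘very fast’ hash functions):

```python
import hashlib

NBITS = 256 * 1024 * 8   # a 2,097,152-bit filter: 256 KB, cache-friendly
PREFIX = 7               # hash the first 7 bytes of each signature

bits = bytearray(NBITS // 8)

def _hashes(window: bytes):
    # Four hash values derived from one md5; stand-ins for fast real hashes.
    d = hashlib.md5(window).digest()
    for i in range(4):
        yield int.from_bytes(d[i * 4:(i + 1) * 4], "little") % NBITS

def add_signature(sig: bytes):
    for h in _hashes(sig[:PREFIX]):
        bits[h // 8] |= 1 << (h % 8)

def maybe_present(window: bytes) -> bool:
    # False: definitely no signature starts here. True: *possibly* one does,
    # so fall through to the exact (e.g. Boyer-Moore) matcher.
    return all(bits[h // 8] & (1 << (h % 8)) for h in _hashes(window))

def scan(stream: bytes, exact_check):
    for i in range(len(stream) - PREFIX + 1):
        if maybe_present(stream[i:i + PREFIX]):
            exact_check(stream, i)   # rare, so the slow path barely runs
```

The speed comes from the common case: for most stream positions the filter says ‘definitely not’ after four cache-resident bit tests, and the expensive exact matcher never runs.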

Also, in followup discussion, a paper describing hardware-level Bloom filters in the Snort IDS was mentioned: S. Dharmapurikar, P. Krishnamurthy, T. Sproull, and J. W. Lockwood, “Deep packet inspection using parallel Bloom filters,” in Hot Interconnects, (Stanford, CA), pp. 44–51, Aug. 2003.

This system is dubbed ‘BloomAV’. Pretty cool. It’s unclear whether the ClamAV developers are keen to incorporate it, but it does point at interesting new techniques for spam signatures.

Tech Camp Ireland

Irish techies, mark your calendars! Various Irish bloggers are proposing a Tech Camp Ireland geek get-together, similar to Bar Camp in approach, for Saturday October 15th.

Ed Byrne and James Corbett are both blogging up a storm already. I’d go, but it’d be a hell of a trip ;)

I would say it needs a little less blog, a little more code, and a little more open source, but it does look very exciting, and it’s great to see the Bar Camp spirit hitting Ireland.

More on ‘Bluetooth As a Laptop Sensor’

Bluetooth As a Laptop Sensor in Cambridge, England.

I link-blogged this yesterday, where it got picked up by Waxy, and thence by Boing Boing — where some readers reportedly consider it doubtful. Craig also expressed some skepticism. However, I think it’s for real.

Check out the comments section of Schneier’s post — there’s a few notable points:

  • Some Bluetooth-equipped laptops will indeed wake from suspend to respond to BT signals.

  • Davi Ottenheimer reports that the current Bluetooth spec offers “always-on discoverability” as a feature. (Obviously the protocol designers let usability triumph over security on that count.)

  • Many cellphones are equipped with Bluetooth, and can therefore be used to detect other ‘discoverable’ BT devices in range.

  • Walking around a UK hotel car park, while pressing buttons on a mobile phone, would be likely to appear innocuous — I know I’ve done it myself on several occasions. ;)

Finally — this isn’t the first time the problem has been noted. The same problem was reported at Disney World, in the US:

Here’s the interesting part: every break-in in the past month (in the Disney parking lots) had involved a laptop with internal bluetooth. Apparently if you just suspend the laptop the bluetooth device will still acknowledge certain requests, allowing the thief to target only cars containing these laptops.

Mind you, perhaps this is a ‘Chinese whispers’ case of the Disney World thefts being amplified. Perhaps it was noted as happening in Disney World, reported in an ‘emerging threats’ forum where the Cambridgeshire cop heard it, and he then picked it up as something worth warning the public about, without knowing for sure that it was happening locally.

Update: aha. An observant commenter on Bruce Schneier’s post has hit on a possibly good reason why laptops implement wake-on-Bluetooth:

On my PowerBook, the default Bluetooth settings were “Discoverable” and “Wake-on-Bluetooth” — the latter so that a Bluetooth keyboard or mouse can wake the computer up after it has gone to sleep.

Emergent Chaos: I’m a Spamateur

Emergent Chaos: I’m a Spamateur:

In private email to Justin “SpamAssassin” Mason, I commented about blog spam and “how to fix it,” then realized that my comments were really dumb. In realizing my stupidity, I termed the word “spamateur,” which is henceforth defined as someone inexperienced enough to think that any simple solution has a hope of fixing the problem.

I think this is my new favourite spam neologism ;)

How convenient does the ‘right thing’ have to be?

Environment: Kung Fu Monkey: Hybrids and Hypotheses. A great discussion of the Toyota Prius:

Kevin Drum recently quoted a study which re-iterated that there’s no “real” advantage to buying a hybrid. It’s only just as convenient — so if you’re driving a hybrid, you’re doing it for some other reason than financial incentive.

That made me think: what a perfect example of just how fucking useless as a society we’ve become. We can’t even bring ourselves to do the right thing when it’s only JUST as convenient as doing the wrong thing. And that’s not even considered odd. Even sadder.

Box Office Patents

Forbes: Box Office Patents.

It’s the kind of plot twist that will send some critics screaming into the aisles: Why not let writers patent their screenplay ideas? The U.S. Patent and Trademark Office already approves patents for software, business methods — remember Amazon.com’s patent on ‘one-click’ Internet orders? — even role-playing games. So why not let writers patent the intricate plot of the next cyberthriller?

So in other words, a law grad called Andrew Knight actually wants to see the world RMS described in his ‘Patent Absurdity’ article for the Guardian, where Les Miserables was unpublishable due to patent infringement. Incredible.

He himself trots out the classic lines, familiar to those who followed the EU software patenting debate:

Knight agrees, up to a point. He won’t reveal the exact details of the plots he’s submitted to the Patent Office, other than to say they involve cyberspace. And he says patents would apply only to ideas that are unique and complex. But he worries that without patent protection, some Hollywood sharpies could change ideas like his around and pass them off as their own.

‘I’m trying to address a person who comes up with a brand-new form of entertainment who may not be a Poe, may not be a Shakespeare, but still deserves to be paid for his work,’ Knight says. ‘Otherwise, who will create anything?’

A perfect pro-patent hat trick!

Running on WordPress!

I’ve decided to try out the real deal — a ‘proper’ weblogging platform, namely WordPress. Be sure to comment if you spot problems…

Grumpiness and Cigarettes

Meta: My apologies if you wound up running into me online at some stage this week — I’ve been in a lousy mood.

I gave up smoking cigarettes at the end of May, and switched to patches. That went pretty well, dropping from 21mg patches, to 14mg, to 7mg. But this week I finally hit the end of the line, stopped applying a patch every morning, and became fully nicotine-free. Only, ouch — it’s not quite as easy as I thought!

Cigarette addiction is (apparently) composed of two conceptual lumps — the physical addiction to nicotine, and the mental addiction to the ‘idea’ of smoking. Through the patches, I’ve successfully nailed the mental addiction, but I’m now facing the physical withdrawal. I’m sweating, dizzy, can’t focus my eyes, can’t concentrate, my skin is going crazy, and I’m INCREDIBLY grouchy. It’s amazing how much havoc the act of withholding nicotine can cause, especially when you consider that it’s not a required nutrient for the human body — it’s an ‘optional extra’ that I never should have gone near in the first place.

Weirdly, though, I don’t want a cigarette. Instead, I want a patch ;)

Xen and UKUUG 2005

Linux: PingWales’ round-up of UKUUG Linux 2005 Day 3 includes this snippet:

As well as running (Virtual Machines), Xen allows them to be migrated on the fly. If a physical system is overloaded, or showing signs of failure, a virtual machine can be migrated to a spare node. This process takes time, but causes very little interruption to service. The machine state is first copied in its entirety, then the changes are copied repeatedly until there are a small enough number that the machine can be stopped, the remaining changes copied and the new version started. This usually provides a service interruption of under 100ms – a small enough jitter that people playing Quake 3 on a server in a virtual machine did not notice when it was moved to a different node.

Now that is cool.

Jim Winstead’s A9 on foot

Images: Jim Winstead’s walk up Broadway from a few days ago has already garnered a few interested parties, since he’s Creative-Commons-licensed all the photos, and they’re easily findable via Google and on Flickr.

I find this interesting; the collision between open source, photography and cartography is cool. The result is a version of maps.A9.com, where you can actually use the images legally in your own work. More people should do this for other cities.

Where the ‘cursor’ came from

Stuff: So C is a massive antiques nut, and got tickets for the Antiques Roadshow next month in LA. As a result, we’ve been shopping around for interesting stuff for her to bring along.

Here’s what I found at the antiques market last weekend:

Click on the pic to check out my multiplication skills!

The Life of a SpamAssassin Rule

Spam: during a recent discussion on the SpamAssassin dev list, the question came up as to how long a rule could expect to maintain its effectiveness once it was public — the rule secrecy issue.

In order to make a point — that certain types of very successful rules can indeed last a long time — I picked out one rule, MIME_BOUND_DD_DIGITS. Here’s a smartened-up copy of what I found out.

This rule matches a certain format of MIME boundary, one observed in 17.4637% of our spam collection, with 0 nonspam hits. Since we have a massive collection of mails received between Jan 2004 and May 2005, and a rule with a known history, we can graph its effectiveness over time.

The rule’s history was:

  • bug 3396: the initial contribution from Bob Menschel, May 15 2004
  • r10692: arrived in SVN: May 16 2004
  • r20178: promoted to ‘MIME_BOUND_DD_DIGITS’: May 20 2004 (funnily enough, with a note speculating about its lifetime from felicity!)
  • released in the SpamAssassin 3.0.0 release: mid-Sep 2004

So, we would expect to see a drop in its effectiveness against spam from late May 2004 onwards, if spammers were reacting to SVN changes; or post-September 2004, if they were reacting to what’s released.

By graphing the number of hits on mails within each 2-hour window, we can get a good idea of its effectiveness over time:

The red bars are total spam mails in each time period; green bars, the number of spam mails that hit the rule in each period. May 15 2004 and Sep 20 2004 are marked; Jan 2004 is at the left, and May 2005 is at the right-most extreme of the graph. (There’s a massive spike in spam volume at the right — I think this is Sober.Q output, which disappears after a week or so.)
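Producing a graph like that is mostly a matter of bucketing mass-check results. Here’s a rough sketch of the binning step, assuming you’ve already parsed out a (timestamp, hit) pair per message — the log-parsing itself is glossed over:

```python
from collections import Counter

WINDOW = 2 * 60 * 60   # 2-hour buckets, in seconds

def bin_hits(messages):
    """messages: iterable of (unix_timestamp, hit_rule) pairs."""
    total, hits = Counter(), Counter()
    for ts, hit in messages:
        bucket = int(ts) // WINDOW
        total[bucket] += 1          # red bars: all spam in the window
        if hit:
            hits[bucket] += 1       # green bars: spam hitting the rule
    return [(b * WINDOW, total[b], hits[b]) for b in sorted(total)]
```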

It appears that the rule holds about even in effectiveness during the 4 months it’s in SVN but unreleased; it declines a little after it makes it into a SpamAssassin release. However, it trails off very slowly — even in May 2005, it’s still hitting a good portion of spam.

Given this, I suspect that most spammers are not changing structural aspects of their spam in response to SpamAssassin with any particular alacrity, or at least are not capable of doing so.

To speculate on the latter, I think many spammers are using pirated copies of the spamware apps, so cannot get their hands on updated versions through ‘legitimate’ channels.

Speculating on the former — in my opinion there’s a very good chance that SpamAssassin just isn’t a particularly big target for them to evade, compared to the juicy pool of gullible targets behind AOL’s filters, for example. ;)

‘Irish EFF’

Ireland: There’s been some discussion about ‘an Irish EFF’ recently, reminding me of the old days of Electronic Frontier Ireland in the 1990s.

I was reminded of this by Danny O’Brien’s article in The Guardian, where he notes an interesting point — half of the effectiveness of the EFF in the US comes from having a few full-time people sitting in an office, answering phone calls. Essentially they act as a human PBX, the go-to people connecting journalists to activists and experts.

Now that is something that could really work, and is needed in Ireland, which is in the same boat as the UK in this respect; the journalists don’t know who to ask for a reliable opposing opinion when the BSA, ICT Ireland, or the IRMA put out incorrect statements. It has to be someone who’s always available for a quote at the drop of a hat, over the phone. From experience, this takes dedication — and without getting paid for it, it’s hard to keep the motivation going.

IrelandOffline have done it pretty well for the telecoms issue; ICTE have done a brilliant job, the best I’ve seen in Europe IMO, of grabbing hold of the e-voting issue to the stage where they own it; but for online privacy, software patenting, and other high-tech-meets-society issues, there’s nobody doing it that successfully.

(Update: added ICTE, slipped my mind! Sorry Colm!)

Happy Birthday to the RISKS Forum!

Tech: One of the first online periodicals I started reading regularly, when I first got access to USENET back in 1989 or so, was comp.risks — Peter G. Neumann’s RISKS Forum. Since then, I’ve been reading it religiously, in various formats over the years.

It appears that RISKS has just celebrated its 20th anniversary.

Every couple of weeks it provides a hefty dose of computing reality to counter the dreams of architecture astronauts and the more tech-worshipping members of our society, who fail to realise that just because something uses high technology doesn’t mean it’s safer.

I got to meet PGN a couple of weeks ago at CEAS, and I was happy to be able to give my thanks — RISKS has been very influential on my code and my outlook on computing and technology.

Nowadays, with remote code execution exploits for e-voting machines floating about, and National Cyber-Security Czars, I’d say RISKS is needed more than ever. Long may it continue!

Stupid ‘Ph’ Neologisms Considered Harmful

Words: ‘Pharming’. I recently came across this line in a discussion document:

‘Wait, isn’t this exactly the kind of attack pharmers mount?’

I was under the impression that ‘pharming’ was a transgenics term: ‘In pharming, … genetically modified (transgenic) animals are mostly used to make human proteins that have medicinal value. The protein encoded by the transgene is secreted into the animal’s milk, eggs or blood, and then collected and purified. Livestock such as cattle, sheep, goats, chickens, rabbits and pigs have already been modified in this way to produce several useful proteins and drugs.’

Obviously this wasn’t what was being referred to. So I got googling. It appears the sales and marketing community of various security/filtering/etc. companies has been getting all het up about various phishing-related dangers.

The earliest article I could find was this — GCN: Is a new ID theft scam in the wings? (2005-01-14):

‘Pharming is a next-generation phishing attack,’ said Scott Chasin, CTO of MX Logic. ‘Pharming is a malicious Web redirect,’ in which a person trying to reach a legitimate commercial site is sent to the phony site without his knowledge. ‘We don’t have any hard evidence that pharming is happening yet,’ Chasin said. ‘What we do know is that all the ingredients to make it happen are in place.’

Oooh scary! The article is short on technical detail (but long on scary), but I think he’s talking about DNS cache poisoning, whereby an attacker implants incorrect data in the victim’s DNS cache, to cause them to visit the wrong IP address when they resolve a name. This Wired article (2005-03-14) seems to confirm this.

But wait! Another meaning is offered by Green Armor Solutions, who use the term to talk about the Panix and Hushmail domain hijacks, where an attacker social-engineered domain transfers from their registrars. There’s no date on the page, but it appears to be post-March 2005.

Finally, yet another meaning is offered in this article at CSO Online: How Can We Stop Phishing and Pharming Scams? (May 2005): ‘The Computing Technology Industry Association has reported that pharming occurrences are up for the third straight year.’ What?! Call Scott Chasin!

Steady on — it appears that the ‘pharming’ CSO Online is talking about has devolved to the stage where it’s simply a pop-up window that attempts to emulate a legit site’s input forms — no DNS trickery involved. (This trick has, indeed, been used in phish for years.)

So right there we have three different meanings for ‘pharming’, or four if you count the biotech one.

It may be impossible to get the marketeers to stop referring to ‘pharming’. But please, if you’re a techie, don’t use the term; its lack of clarity renders it useless. Anyway, the biotech people were there first, by several years…

Stunning round-up of alleged election fraud in Ohio

Voting: None Dare Call It Stolen – Ohio, the Election, and America’s Servile Press, by Mark Crispin Miller.

Miller and many others have obviously put a lot of work into chasing down each incident in Ohio since last November, and there are quite a lot of them. It’s impressive the degree to which recounts were evaded, if these allegations are true. There are more shocking cases alleged than I could really fit here — but here are some of the lowest points:

On December 13, 2004, it was reported by Deputy Director of Hocking County Elections Sherole Eaton, that a Triad GSI employee had changed the computer that operated the tabulating machine, and had “advised election officials how to manipulate voting machinery to ensure that preliminary hand recount matched the machine count.” This same Triad employee said he worked on machines in Lorain, Muskingum, Clark, Harrison, and Guernsey counties.

it strongly appears that Triad and its employees engaged in a course of behavior to provide “cheat sheets” to those counting the ballots. The cheat sheets told them how many votes they should find for each candidate, and how many over and under votes they should calculate to match the machine count. In that way, they could avoid doing a full county-wide hand recount mandated by state law.

In Union County, Triad replaced the hard drive on one tabulator. In Monroe County, “after the 3 percent hand count had twice failed to match the machine count, a Triad employee brought in a new machine and took away the old one. (That machine’s count matched the hand count.)”

The willingness to throw away functioning, reliable election systems and replace them with new, easy-to-subvert ones is astounding. But on top of that, when concerned parties investigate and find danger signs, it’s easily buried:

Miller emphasizes that, even after the National Election Data Archive Project, on March 31, 2005, “released its study demonstrating that the exit polls had probably been right, it made news only in the Akron Beacon-Journal,” while “the thesis that the exit polls were flawed had been reported by the Associated Press, the Washington Post, the Chicago Tribune, USA Today, the San Francisco Chronicle, the Columbus Dispatch, CNN.com, MSNBC, and ABC.”

Miller’s conclusion: ‘the press has unilaterally disarmed’.

SpikeSource, Open Source, and Bongo

Open Source: so I was just looking at OSCON 2005’s website, and I noticed that it listed Kim Polese, of SpikeSource, as a presenter.

I don’t really pay any attention to what’s happening in Java these days, but it appears that SpikeSource launched last year to provide ‘enterprise support services for open-source software’ with a Java/enterprise slant.

Funnily enough, my last encounter with a Kim-Polese-headed company did indeed have a big effect on me, open-source-wise.

That company was Marimba, and they made an excellent Java GUI builder called Bongo. In those days (nearly ten years ago!), I was working on a product for Iona as a developer, in Java and C++, and we needed to provide a GUI on a number of Java tools. I chose to use Bongo, as it had a great feature set and looked reliable.

Wow, was I wrong! The software was reliable — sadly, the same couldn’t be said about the vendor. What I hadn’t considered was the possibility that the company might decide to discontinue the product, and not offer any migration help to its customers — and that’s exactly what happened. Sometime around 1998, Marimba decided that Bongo wasn’t quite as important as their Castanet ‘push’ product, and dropped it. Despite calls from the Bongo-using community to release the code so that the community could maintain it and avoid code-rot, they never did, and as a result apps using Bongo had to be laboriously rewritten to remove the Bongo dependencies.

I learned an important lesson about writing software — if at all possible, build your products on open source, instead of relying on a fickle commercial software vendor. It’s a lot harder to have the rug pulled out from under you, that way.

Update: Well, it seems I was quite far off the mark about Marimba. Someone who worked at Marimba at the time read the blog entry, and got in touch via email:

I was an employee of Marimba in the early days, and was around when we developed Bongo, and still later, when we discontinued it, and still later, when Bongo *was* released to the open-source community (jm: appears to be around the start of 1999 I think). It was hosted on a site called freebongo.org and continued to be enhanced with new features and a lot of new and cool widgets. It was ultimately discontinued a few years later due to lack of interest.

It was hosted and primarily maintained in the open-source community by one of the original Bongo engineers. Here’s a link from the Java Gazette from the days when it was called Free Bongo.

So don’t go blaming Marimba. We did listen to our users and release the code!

Fair enough — and they deserve a lot more credit than I’d initially assumed. I guess I must have missed this later development after leaving Iona. Apologies, ex-Marimbans!

Patents and Laches

Patents: This has come up twice recently in discussions of software patenting, so it’s worth posting a blog entry as a note.

There’s a common misconception that a patent holder does not need to enforce a patent in the courts for it to remain valid. This isn’t true, in the US at least, where there is the legal doctrine of ‘laches’, defined as follows in the Law.com dictionary:

Laches – the legal doctrine that a legal right or claim will not be enforced or allowed if a long delay in asserting the right or claim has prejudiced the adverse party (hurt the opponent) as a sort of ‘legal ambush’.

The Bohan Mathers law firm have a good paragraph explaining this:

…the patent holder has an obligation to protect and defend the rights granted under patent law. Just as permitting the public to freely cross one’s property may lead to the permanent establishment of a public right of way and the diminishment of one’s property rights, so the knowing failure to enforce one’s patent rights (one legal term for this is laches) against infringement by others may result in the forfeiture of some or all of the rights granted in a particular patent.

See also this and this page for discussion of cases where it was relevant. It seems by no means clear-cut, but the doctrine is there.

CEAS

Spam: back from CEAS. The schedule, with links to full papers, is up, so you can go along and check ’em out if you’re curious.

Overall, it was pretty good — not as good as last year’s, but still pretty worthwhile. I didn’t find any of the talks to be quite up to the standards of last year’s TCP damping or Chung-Kwei papers; but the ‘hallway track’ was unbeatable ;)

Here are my notes:

AOL’s introductory talk had some good figures: a Pew study reported that 41% of people check email first thing in the morning, 40% have checked in the middle of the night, and 26% don’t go more than 2-3 days without checking mail. It also noted that URLs spimmed (spammed via IM) are not the same as URLs spammed — but the obfuscation techniques are the same; and they’re using two learning databases, per-user and global, with the ‘Report as Spam’ button feeding both.

Experiences with Greylisting: John Levine’s talk had some useful data — there are still senders that treat a 4xx SMTP response (temp fail) as a 5xx (permanent fail), particularly after the end of the DATA phase of the transaction, such as an ‘old version of Lotus Notes’; and there are some legit senders, such as Kodak’s mail-out systems, which regenerate the body in full on each send, even after a temp fail, so the body will look different. He found that less than 4% of real mail from real MTAs is delayed, and that overall, 17% of his mail traffic was temp-failed. The nonspam that was delayed showed peaks at 400 and 900 seconds between first tempfail and eventual delivery.
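For reference, the core of greylisting is tiny: tempfail the first delivery attempt for each (client IP, sender, recipient) triple, and accept retries after a delay. A toy sketch of the idea — not John’s implementation:

```python
import time

DELAY = 300    # seconds a triple must wait before being accepted
seen = {}      # (ip, sender, rcpt) -> time of first attempt

def check_greylist(ip: str, sender: str, rcpt: str) -> str:
    triple = (ip, sender.lower(), rcpt.lower())
    first = seen.setdefault(triple, time.time())
    if time.time() - first < DELAY:
        # Real MTAs retry after a 4xx; most ratware never does. Senders
        # that treat this as a 5xx (see above) lose mail, though.
        return "450 4.7.1 Greylisted, please try again later"
    return "250 OK"
```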

As usual, there were a variety of ‘antispam via social networks’ talks — there always are. Richard Clayton had a great point about all that: paraphrasing, I trust my friends and relatives on some things, and they are in my social networks — but I don’t trust their judgement of what is and is not spam. (If you’ve ever talked to your mother about how she always considers mails from Amazon to be spam, you’ll know what he means.)

Combating Spam through Legislation: A Comparative Analysis of US and European Approaches: the EU ‘opt-in’ directive is now transposed everywhere in the EU; when an EU citizen is spammed by a sender in another EU country, reports should go to the antispam authority in the sender’s country; and there’s something called ‘ECNSA’, an EU contact network of spam authorities, which sounds interesting (although ungoogleable).

Searching For John Doe: Finding Spammers and Phishers: MS’ antispam attorney, Aaron Kornblum, had a good talk discussing their recent court cases. Notably, he found one case where an Austrian domain owner had set up a redirector site that sounded expressly designed for spam use — news to me (and worrying).

A Game Theoretic Model of Spam E-Mailing: Ion Androutsopoulos gave a very interesting talk on a game theoretic approach to anti-spam — it was a little too complex for the time allotted, but I’d say the paper is worth a read.

Understanding How Spammers Steal Your E-Mail Address: An Analysis of the First Six Months of Data from Project Honey Pot: Matthew Prince of Project Honeypot had some excellent data in this talk; recommended. He’s found that there’s an exponential relationship between Google PageRank and spam received at scraped addresses, which matches my theory of how scrapers work; and that only 3.2% of address-harvesting IPs are in proxy/zombie lists, compared to 14% of spam SMTP delivery IPs. (BTW, my theory is that address scraping generally uses Google search results as a seed, which explains the former.)

Computers beat Humans at Single Character Recognition in Reading based Human Interaction Proofs (HIPs): this presented some great demonstrations of how a neural network can be used to solve HIPs (aka CAPTCHAs) automatically. However, I’m unsure how useful this data is, given that the NN required 90000 training characters to achieve the accuracy levels noted in the paper; unless the attacker has access to their own copy of the HIP implementation they can run themselves, they’d have to spend months performing HIPs to train it, before an attack is viable.

Throttling Outgoing SPAM for Webmail Services: cites Goodman in ACM E-Commerce 2004 as saying that ESP webmail services are a ‘substantial source of spam’, which was news to me! (less than 1% of spam corpora, I’d guess). It then discusses requiring the submitter of email via an ESP webmail system to perform a hashcash-style proof-of-work before their message is delivered. By using a Bayesian spam filter to classify submitted messages, the ESP can cause spammers to perform more work than non-spammers, thereby reducing their throughput. Didn’t strike me as particularly useful — Yahoo!’s Miles Libbey got right to the heart of the matter, asking if they’d considered a situation where spammers have access to more than one computer; they had not. A better paper for this situation would be Alan Judge’s USENIX LISA 2003 one which discusses more industry-standard rate-limiting techniques.
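The proof-of-work mechanism itself is easy to sketch: find a nonce whose hash over the message has N leading zero bits, where the filter picks N from the message’s spamminess. This is my own hashcash-style illustration, not the paper’s exact scheme:

```python
import hashlib
from itertools import count

def mint_stamp(message: bytes, bits: int) -> int:
    """Find a nonce whose SHA-1 over (message + nonce) has `bits`
    leading zero bits: exponentially costly to find, cheap to verify."""
    target = 1 << (160 - bits)          # SHA-1 output is 160 bits wide
    for nonce in count():
        h = hashlib.sha1(message + str(nonce).encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce

# The ESP's knob: derive `bits` from the Bayesian score, so a spammy
# message might cost ~2^24 hash operations while ham costs ~2^10.
```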

SMTP Path Analysis: IBM Research’s anti-spam team discuss something very similar to several techniques that have been in SpamAssassin for a while — the auto-whitelist (which tracks the submitter’s IP address, rounded to the nearest /16 boundary), since 2001 or 2002, and the Bayes tweaks we added in bug 2384, back in 2003.

Naive Bayes Spam Filtering Using Word-Position-Based Attributes: an interesting tweak to Bayesian classification using a ‘distance from start’ metric for the tokens in a message. Worth trying out for Bayesian-style filters, I think.

Good Word Attacks on Statistical Spam Filters: not so exciting. A bit of a rehash of several other papers — jgc’s talk at the MIT conference on attacking a Bayesian-style spam filter, and the previous year’s CEAS paper from the SpamBayes guys on using a selection of good words — and it entirely missed something we found in our own tech report: that effective attacks will result in poisoned training data, with a significant bias towards false positives. In my opinion, the latter is a big issue that needs more investigation.

Stopping Outgoing Spam by Examining Incoming Server Logs: Richard Clayton’s talk. Well worth a read. It’s an interesting technique for ISPs — detecting outgoing spam by monitoring hits on your MX from your own dialup pools that match known ratware patterns.

Anonymous remailers being tampered with

Politics: EDRI-gram notes that the Firenze Linux User Group’s server was tampered with last month at its ISP colo:

On Monday 27 June 2005, two members of FLUG (Firenze Linux User Group) visited the data centre of Dada S.p.a., in Milan, where the community server of the group is physically housed, in order to move it to another provider.

When the server was put out of the rack, however, it was discovered that the upper lid of the server case was half-opened. At a closer inspection, it was also discovered that the case lid was scratched, as if it had been put out and reinserted into the rack. Worse, the CD-ROM cable was missing, as were the screws that kept the hard disks in place.

What is particularly worrying is that the server hosted an anonymous remailer, whose keys and anonymity capabilities could have been compromised. Considering what happened to Autistici/Inventati server – which hosted another anonymous remailer – this possibility is not so far fetched. This begs the question whether a co-ordinated attempt at intercepting anonymous/private communications on the Internet has been ongoing in the past weeks and months.

Bizarre goings-on.

looking at the new DKIM draft

The combined DKIM standard, merging Yahoo!’s DomainKeys and Cisco’s IIM, has been submitted to the IETF as a candidate spec by the MASS ‘pre-working group effort’. I like the idea behind both schemes (a few years back, I, a few other SpamAssassin developers, and several others came up with the roots of a message-signature anti-forgery scheme we called ‘porkhash’, but never really went anywhere with it), so I’m glad to see this one progressing nicely.

Seeing as I never seem to write much about anti-spam here any more, I might as well remedy that now with some comments on the new DKIM draft. ;)

It’s a very good synthesis of the two previous drafts, DomainKeys and IIM, more DK-ish, but taking the nice features from IIM.

The ‘h=’ tag is now listed as REQUIRED. This specifies the list of headers that are to be signed. If I recall correctly, this was added in IIM, modifies the behaviour of DK, and is a good feature — it protects against in-transit corruption by (a) specifying an order for the headers, to protect against MTAs that reorder them; and (b) allowing sites to protect the ‘important’ headers (From, To, Subject etc.) and ignore possible additions by MTAs down the line (scanner additions, mailing-list munging and additions, and so on).

A list of recommended headers to sign is included, with From as a MUST and Subject, Date, Content-Type and Content-Transfer-Encoding as a SHOULD.
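To make the ‘h=’ semantics concrete, here’s a toy sketch of the header-selection step — grossly simplified, with none of the draft’s canonicalization, body hashing, or actual crypto:

```python
import hashlib

def header_hash(headers, h_tag):
    """headers: list of (name, value) pairs, in wire order.
    h_tag: the 'h=' list, e.g. ["from", "to", "subject", "date"]."""
    picked = []
    for wanted in h_tag:
        for name, value in headers:
            if name.lower() == wanted:
                picked.append(f"{name}:{value}")
                break
    # Headers absent from h= -- e.g. ones added downstream by scanners or
    # mailing-list software -- never enter the hash, so such additions can't
    # break the signature; the fixed h= ordering also defeats MTA reordering.
    return hashlib.sha256("\r\n".join(picked).encode()).hexdigest()
```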

Forwarding is, of course, just fine. This one doesn’t suffer from the SPF failure mode, whereby a forwarder will break validation if it doesn’t rewrite the SMTP MAIL FROM sender address. (Of course, it now has its own new failure modes — the message must be forwarded in a nearly-pristine state.)

The message length to sign can be specified with ‘l=’. This may be useful to protect against the issue where mailing-list managers add a footer to a signed message. The draft recommends that verifiers remove text after the ‘l=’ length, if any appears, since appended text offers a way for spammers to reuse existing signatures. I still have to think about this, but I suspect SpamAssassin could give points for additional text beyond the ‘l=’ point that doesn’t match mailing-list footer profiles.

The IIM HTTP-based public-key infrastructure is gone; it’s all DNS, as it was in DK.

The ‘z=’ field, which contains copies of the original headers, is a great feature for filters — we can now pragmatically detect ‘acceptable’ header rewriting if necessary, and handle recovery at the receiver end.

Multiple signatures, unfortunately, couldn’t be supported. I can see why, though — it’s a very hard problem.

The ‘Security Considerations’ section is excellent — 9.1.2 uses a very clever HTML attack.

Looks like development of DKIM-Milter and an associated library, libdkim, is underway.

Given all that, it looks good. It’s not clear how much we can do with DK, and now DKIM, in SpamAssassin, however — it’s very important in these schemes that the message be entirely unmunged, and in most SpamAssassin installs, the filter doesn’t get to see the message until after the delivering MTA, or the MDA (Message Delivery Agent), has performed some rewriting. This would cause FPs if we’re not very, very careful.

I hope, though, that we can find a useful way to trust DKIM results. It appears likely that they’d make an excellent way to provide trustworthy whitelisting — ‘whitelist_from_dkim’ rules, similar to our new whitelist_from_spf support. (In fact, we could probably just merge both into some new ‘whitelist_from_authenticated’ setting.)

OpenWRT vs Netgear MR814: no contest

Hardware: After a few weeks running OpenWRT on a Linksys WRT54G, here’s a status report.

Things that the new WRT54G running OpenWRT does a whole lot better than the Netgear MR814:

  • Baseline: obviously it doesn’t DDoS the University of Wisconsin, and it doesn’t lose the internet connection regularly, as noted in that prior post. I knew that already, though, so those aren’t really new wins.
  • It’s quite noticeably faster. I’ve seen it spike to double the old throughput rates, and it’s solid, too; less deviation in those rates.
  • It doesn’t break my work VPN. I wasn’t sure if it was the MR814 that was doing this, requiring an average of about 20 reconnects per day — now, I know it for a fact. I’ve had to reconnect my VPN connection about 4 times over the past week.
  • It doesn’t break the Gigafast UIC-741 USB wifi dongle I’m using on the MythTV box. Previously that would periodically disappear from the HAN. Again, I had this pegged as an issue with the dongle’s driver; removing the MR814 from the equation has solved it too, and it’s now running with 100% uptime so far.
  • It does traffic shaping with Wondershaper, so I can use interactive SSH, VNC, or remote desktop while downloading, even if it’s another machine on the HAN doing the download.
  • It’s running Linux — ssh’ing in, using ifconfig, and vi’ing shell scripts on my router is very, very nice.

Man, that MR814 was a piece of crud. ;) I can’t recommend OpenWRT enough…

EU software patents directive is history

Patents: A great outcome! The proposed Directive has been dropped, in the face of massive opposition. Coverage: /., FFII, FT.com, VNUnet, FSFE.

Unfortunately, Rocard’s proposed amendments, which would have turned this directive into a major win for us, didn’t pass — but it’s still a good win. Software patents are not explicitly legal throughout Europe; although some jurisdictions do permit them, they’re in a legal grey area, and enforcement is therefore hard (and very expensive for patent holders). This is a much better situation than if the directive as proposed by the Council had passed, since that would have explicitly legalised them throughout the EU.

On top of this win, what I find significant is that we’ve now brought the issue from where it was a few years ago — a minor concern known only to a few uber-geeks — to a major political issue that made headlines around the globe. Even my local NPR affiliate reported on this decision! That’s a far cry from the mid-’90s, when I had a hard time explaining the point of the League for Programming Freedom to my hacker friends in the TCD Maths Department.

A great quote from the VNUnet article:

‘This represents a clear victory for open source,’ said Simon Phipps, chief open source officer at Sun Microsystems. ‘It expresses Parliament’s clear desire to provide a balanced competitive market for software.’

Yes, that’s Sun saying that less software patenting is a good thing. Believe me, that’s a great leap forward. Or check out Irish MEP Kathy Sinnott’s amazing comments. She hits the nail right on the head; I’m very impressed by that speech.

McCreevy seeing anti-globalisation protesters everywhere

Patents: I’m just back from a fantastic holiday weekend, totally offline, hiking through Catalina Island. I’m a little bit sunburnt, my nose is peeling, but it was great fun. I got a fantastic picture of the sun setting over hundreds of boats bobbing at their moorings in Two Harbors, which I must upload at some stage.

Anyway, it seems that over the weekend, the EU software-patents debate has swung back heavily towards the anti-swpat side. Fingers crossed — the vote is this week.

Also, today, EUpolitix.com has an interview with Charlie McCreevy, quoting him as saying:

‘The theme, or the background music, to both of these particular directives (the CII and Services Directives) you could see as part of, anti-globalisation, anti-Americanism, anti-big business protests — in lots of senses, anti-the opening up of markets’

This is standard practice for the Irish government — they did exactly the same thing with the e-voting issue, painting the ICTE as ‘linked to the anti-globalisation movement’. (I have a feeling they think that any group organised online must be ‘anti-globalisation’, at this stage.)

Of course, with these accusations of being anti-free-market, it’s important to remember that a patent is a government-issued monopoly on an invention (or in the software field, on an idea), in a particular local jurisdiction. If anything, being against software patenting is a pro-free-market position, one shared by prominent US libertarians; and nothing gets more pro-free-market than those guys. ;)

CEAS coming up soon…

Spam: if you work in anti-spam, especially in filtering, or even just with email in general, it’s well worth going to CEAS 2005, the Conference on Email and Anti-Spam, on Thursday July 21st and Friday 22nd at Stanford:

The organizers of the Conference on Email and Anti-Spam invite you to participate in its second annual meeting. This forum brings together academic and industrial researchers to present new work in all aspects of email, messaging and spam — with papers this year covering fields as diverse as text classification, clustering and visualization of email, social network analysis applied to both email and spam, spam filtering methods including text classification and systems approaches, game theory, data analysis, Human Interactive Proofs, and legal studies, among others. The conference will feature 26 paper presentations, a banquet, and two invited speakers. See http://www.ceas.cc for details of the current program, as well as on-line registration.

Registration runs out on July 10th.

I went last year, and it was excellent — several very interesting papers were presented. I’m going this year, too, along with quite a few SpamAssassin committers, and I’m looking forward to it.

Hackability as a selling point

Hardware: On my home network, I recently replaced my NetGear MR814 with a brand new Linksys WRT54G.

My top criteria for what hardware to buy for this job weren’t price, form factor, how pretty the hardware is, or even what features it had — instead, I bought it because it’s an extremely hackable router/NAT/AP platform. Thanks to a few dedicated reverse engineers, the WRT hardware can now be easily reflashed with a wide variety of alternative firmware distributions, including OpenWRT, a fully open-source distro that offers no UI beyond a command-line.

Initially, I considered a few prettier UIs — HyperWRT, for example — since I didn’t want to have to spend days hacking on my router, of all things, looking stuff up in manuals, HOWTOs and Google. In the end, though, I decided to give OpenWRT a spin first. I’m glad I did — it turned out to be a great decision.

(There was one setup glitch, btw — OpenWRT now defaults to setting up WPA, but the documentation claims that the default is still no crypto, as it was previously.)

The flexibility is amazing; I can log in over SSH and run the iftop tool to see what’s going on on the network, which internal IPs are using how much bandwidth, how much bandwidth I’m really seeing going out the pipe, and get all sorts of low-level facts out of the device that I’d never see otherwise. I could even run a range of small servers directly on the router, if I wanted.

Bonus: it’s rock solid. My NetGear device had a tendency to hang frequently, requiring a power cycle to fix; that bug has been open for nearly a year and a half without a fix from NetGear, who had long since moved on to the next rev of cheapo home equipment and weren’t really bothering to support the MR814. I know this is cheap home equipment — which is why I was still muddling along with it — but that’s just ridiculous. None of that crap with the (similarly low-cost) WRT. OpenWRT also doesn’t contain code to DDoS NTP servers at the University of Wisconsin, which is a bonus, too. ;)

Sadly, I don’t think Cisco/Linksys realise how this hackability is making their market for them. They’ve been plugging the security holes used to gain access to reflash the firmware in recent revisions of the product (amazingly, you have to launch a remote command execution attack through an insecure CGI script!), turning off the ability to boot via TFTP, and gradually removing the ways to reflash the hardware. If they succeed, it appears the hackability market will have to find another low-cost router manufacturer to give our money to. (update, June 2006: they since split the product line into a reflashable Linux-based “L” model and a less hackable “S” model, so it appears they get this 100%. great!)

Given that, it’s interesting to read this interview with Jack Kelliher of pcHDTV, a company making HDTV video capture cards:

Our market isn’t really the mass market. We were always targeting early adopters: videophiles, hobbyists, and students. Those groups already use Linux, and those are our customers.

Matthew Gast: The sort of people who buy Linksys APs to hack on the firmware?

Jack Kelliher: Exactly. The funny thing is that we completely underestimated the size of the market. When we were starting up the company, we went to the local Linux LUG and found out how many people were interested in video capture. Only about 2 percent were interested in video on Linux, so we thought we could sell 2,000 cards. (Laughs.) We’ve moved way beyond that!

Well worth a read. There’s some good stuff about ulterior motives for video card manufacturers to build MPEG decoding into their hardware, too:

The broadcast flag rules are conceptually simple. After the digital signal is demodulated, the video stream must be encrypted before it goes across a user accessible bus. User accessible is defined in an interesting way. Essentially, it’s any bus that a competent user with a soldering iron can get the data from. Video streams can only be decrypted right before the MPEG decode and playback to the monitor.

To support the broadcast flag, the video capture must have an encryptor, and the display card must have a decryptor. Because you can’t send the video stream across a user accessible bus, the display card needs to be a full MPEG decoder as well, so that unencrypted video never has to leave the card.

Matthew Gast: So the MPEG acceleration in most new video cards really isn’t really for my benefit? Is it to help the vendors comply with the broadcast flag?

Jack Kelliher: Not quite yet. Most video cards don’t have a full decoder, so they can’t really implement the broadcast flag. ATI and nVidia don’t have full decoders yet. They depend on some software support from the operating system, so they can’t really implement the broadcast flag. Via has a chipset with a full decoder, so it would be relatively easy for them to build the broadcast flag into that chipset.

Aha.

Project management, deadlines etc.

Work: I took a look over at Edd Dumbill’s weblog recently, and came across this posting on planning programming projects. He links to another article and mentions:

My recent return to managing a team of people has highlighted for me the difficulties of the arbitrary deadline approach to project management. Unfortunately, it’s also the default management approach applied by a lot of people, because the concept is easy to grasp.

The arbitrary deadline method is troublesome because of the difficulty of estimation. As John’s post elaborates, you can never foresee all of the problems you’ll meet along the way. The distressing inevitability of 90% of the effort being required by 2% of the deliverable is frequently inexplicable to developers themselves. Never mind the managers remote from the development!

I’ve been considering why my experience of working with open source seems generally preferable to commercial work, and this may be one of the key elements. Commercial software development is deadline-driven, whereas most open source development has not been, in my experience; ‘it’s ready when it’s ready’.

Edd suggests that using a trouble-ticket-based system for progress tracking and management is superior. I’m inclined to agree.

Irish SME associations quiet on patenting

Patents: yes, I keep rattling on about this — the vote is coming up on July 6th. I promise I’ll shut up after that ;)

UEAPME has issued a statement regarding the directive which is strongly critical of its current wording (UEAPME is the European small and medium-sized business trade association, comprising 11 million SMEs). Quote:

‘The failure to clearly remove software from the scope of the directive is a setback for small businesses throughout Europe. UEAPME is now calling on the European Parliament to reverse yesterday’s decision at plenary session next month and send a strong message that an EU software patent is not an option,’ Hans-Werner Müller, UEAPME Secretary General, stated.

‘There is growing agreement among all actors that software should not be patented, so providing an unequivocal definition in the directive that guarantees this is clearly in the general interest. We are calling on the Parliament to support the amendments that would ensure this,’ said Mr Müller.

‘The cacophony of misinformation and misleading spin from the large industry lobby in the run up to this vote has obscured the general consensus on preventing the patenting of pure software.’

That’s all well and good. So presumably the Irish members of UEAPME — ISME and the SFA — are agreeing, right? Sadly, neither of them has issued any press releases on the subject, as far as I can see, and approaches by members of IFSO have been totally fruitless.

Since both have recently issued press statements noting that Irish small businesses face difficulties with the rising costs of doing business, this would seem to be a no-brainer — legalising software patents would immediately expose Irish SMEs to the costs associated with them: licensing fees, fighting spurious infringement litigation from ‘patent troll’ companies, the ‘chilling effects’ on investors noted by Laura Creighton, and of course the high price of retaining patent lawyers to file patents on your own innovations. One wonders why they aren’t concerned about these costs…

Happy Midwinter’s Day!

Antarctic: Happy Midwinter’s Day!

I’ve just finished reading Big Dead Place, Nicholas Johnson’s book about life at McMurdo Station and the US South Pole Station, with anecdotes from his time there in the early years of this decade.

It’s a fantastic book — very illustrative of how life really goes on at a remote research base, once you get beyond romantic notions of exploring the wild frontier. (Like many geek kids, I spent my childhood dreaming of space exploration, and Antarctica is the nearest thing you can get to that right now.) A bonus: it’s hilarious, too.

Unfortunately it’s far from all good — as one review notes, it’s like ‘M*A*S*H on ice, a bleak, black comedy.’ There’s story after story of moronic bureaucratic edicts emailed from comparatively subtropical Denver, Colorado, ass-covering emails from management on a massive scale, and injuries and asbestos exposures covered up to avoid spoiling ‘metrics’.

Here’s a sample of such absurdity, from an interview with the world-record-breaking Norwegian Antarctic explorer Eirik Sønneland:

BDP: I was working at McMurdo when you arrived in 2001. I remember it well because we were commanded by NSF not to accommodate you in any way, and were forbidden to invite you to our rooms or into any buildings. We were told not to send mail for you, nor to send email messages for you. While you were in the area, NSF was keeping a close eye on you. What did the managers say to you when you arrived?

They asked us what plans we had for getting home. The manager at Scott Base (jm: the New Zealand base) was calm and listened to what we had to say. I must be honest and say that this was not the way we were treated by the U.S. manager. It was like an interrogation. Very unpleasant. He acted arrogant. However, it seemed like he started to realize after a couple of days that we didn’t try to fool anybody. He probably got his orders from people that were not in Antarctica at the time. And, to be honest, today I don’t have bad feelings toward anyone in McMurdo. Bottom line, what did hurt us was that people could not think without using bureaucracy. If people could only try to listen to what we said and stop looking up paragraphs in some kind of standard operating procedures for a short while, a lot could have been solved in a shorter time.

One example: our home office, together with Steven McLachlan and Klaus Pettersen in New Zealand, got a green light from the captain of the cargo ship that would deliver cargo (beer, etc.) to McMurdo, who said he would let us travel for free back to New Zealand if it was okay with his company. At first the company was agreeable, but then NSF told them that the ship would be under their rent until it left McMurdo and was 27 km away. Reason for the 27 km? The cargo ship needed support from the Coast Guard icebreaker to get through the ice. Since, technically, the contract with NSF did not cease until the ship left the ice, NSF could stop us from going on the ship. At which point NSF offered to fly us from McMurdo for US$50,000 each.

He also maintains an excellent website at BigDeadPlace.com, so go there for an idea of the writing. BTW, it appears the UK also maintains an Antarctic base. Here’s hoping they keep the bureaucracy at a saner level over there.

The meaning of the term ‘technical’ in software patenting

Patents: One of the key arguments from the ‘pro’ side in favour of the new EU software patenting directive, as currently worded, is that it doesn’t ‘allow software patents as such’, since it requires a ‘technical’ inventive step for a patent to be considered valid.

Various MEPs have tried to clarify the meaning of this vague phrase, but without luck so far.

Coverage has mostly taken this to mean that ‘pure software’ patents are not permissible: see, for example, this Washington Post article, FT.com, and InformationWeek.

But is this really the case, in pragmatic terms? What does a ‘technical inventive step’ mean to the European Patent Office?

Well, it doesn’t look at all promising, judging by this decision from the Boards of Appeal of the European Patent Office, dated 21 April 2004, dealing with a Hitachi business-method patent on an ‘automatic auction method’. The claims of that patent application (97 306 722.6) covered an algorithm for performing an auction over a computer network using client-server technology. The actual nature of the patent isn’t important, anyway — what is important is how the Boards of Appeal judged its ‘technical’ characteristics.

The key section is 3.7, where the Board writes:

For these reasons the Board holds that, contrary to the examining division’s assessment, the apparatus of claim 3 is an invention within the meaning of Article 52(1) EPC since it comprises clearly technical features such as a “server computer”, “client computers” and a “network”.

So, in other words, if the idea of a computer network is involved in the claims of a patent, it ‘includes technical aspects’. The decision then goes on to discuss other technical characteristics that may appear in patents:

The Board is aware that its comparatively broad interpretation of the term “invention” in Article 52(1) EPC will include activities which are so familiar that their technical character tends to be overlooked, such as the act of writing using pen and paper.

So even writing with a pen and paper has technical character!

It’s a cop-out, designed to fool MEPs and citizens into thinking that a reasonable limitation is being placed on what can be patented, when in reality there are effectively no limits, as long as any kind of equipment is involved beyond counting on your fingers.

The only way to be sure is to ensure that the directive, as it eventually passes, is crystal clear on this point, with the help of the amendments that the pro-patent side are so keen to throw out.

(BTW, I found this link via RMS’ great article in the Guardian, where he discusses software patenting using literature as an analogy. Recommended reading!)

Latest Script Hack: utf8lint

Perl: double-encoding is a frequent problem when dealing with UTF-8 text: a string that is already UTF-8 gets treated as (typically) ISO Latin-1, and is re-encoded to UTF-8, garbling every non-ASCII character in it.

utf8lint is a quick hack script which uses Perl’s Encode module to detect this. Feed it your data on STDIN, and it’ll flag any lines containing text that may be doubly-encoded UTF-8, in a lint-ish way.
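The underlying trick is straightforward: decode the bytes as UTF-8, re-encode the result as Latin-1, and check whether you get valid multi-byte UTF-8 back. Here’s a minimal sketch of that check; it is not the actual utf8lint source, which may differ in detail:

    #!/usr/bin/perl -w
    # Sketch of a double-encoding check; not the actual utf8lint source.
    use strict;
    use Encode qw(decode encode FB_CROAK);

    while (my $line = <STDIN>) {
        # Interpret the raw input bytes as UTF-8; skip lines that
        # aren't valid UTF-8 at all.  (Work on a copy, since the
        # Encode functions can modify their source argument.)
        my $copy  = $line;
        my $chars = eval { decode('UTF-8', $copy, FB_CROAK) };
        next unless defined $chars;

        # Re-encode the decoded characters as Latin-1.  If the line
        # was doubly-encoded, this recovers the 'inner' UTF-8 bytes;
        # a line with characters above U+00FF can't have been.
        my $bytes = eval { encode('ISO-8859-1', $chars, FB_CROAK) };
        next unless defined $bytes;

        # If those bytes decode as UTF-8 *and* contain multi-byte
        # sequences, the line was probably encoded twice.  (Pure-ASCII
        # lines re-encode cleanly without proving anything.)
        my $inner = eval { decode('UTF-8', $bytes, FB_CROAK) };
        if (defined $inner and $inner =~ /[^\x00-\x7F]/) {
            print "$.: possible double-encoded UTF-8: $line";
        }
    }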

BSA Spams Patent Holders

Patents: An anonymous contributor writes:

‘I just received this letter and these pre-addressed postcards in the post this morning. I was surprised when I saw the envelope, because I’d never received anything from the BSA before. It turned out that they had extracted my name and address from the European Patents database, because I registered a software patent once. So a lot of these letters have probably been sent out.

According to the letter, from Francisco Mingorance, the draft directive is being turned around to ‘rob small businesses of their intellectual property assets’.

I find it hard to see how that could be true. However, the BSA’s letter has an important message you should heed: it is critical to contact your European representatives (your MEP and your country’s Commissioner) within the next two weeks. Let them know that the European Union should curtail software patents once and for all.

Get out your best stationery and write to your MEP at the address given on this page.

Make sure your message is short and clear. SMEs don’t benefit from patents. Few patents are held by SMEs, and the cost of applying for, maintaining and defending them is crippling.’

jm: I would suggest noting that you support the position of rapporteur Michel Rocard MEP, and/or the FFII — details here. Please do write!

BTW, the contributor also offers: ‘if anyone is interested in doctoring up the BSA postcards, I can provide the hi-res scans.’ ;)

Amazing article series on Climate Change

Science: in April and May, the New Yorker printed an amazing series of articles on climate change by Elizabeth Kolbert, full of outstanding research and interviews with the key players.

Unlike much coverage, the series spells out the expected effects of climate change in the US:

Different climate models offer very different predictions about future water availability; in the paper, Rind applied the criteria used in the Palmer index to GISS’s model and also to a model operated by NOAA’s Geophysical Fluid Dynamics Laboratory. He found that as carbon-dioxide levels rose the world began to experience more and more serious water shortages, starting near the equator and then spreading toward the poles. When he applied the index to the GISS model for doubled CO2, it showed most of the continental United States to be suffering under severe drought conditions. When he applied the index to the G.F.D.L. model, the results were even more dire. Rind created two maps to illustrate these findings. Yellow represented a forty-to-sixty-per-cent chance of summertime drought, ochre a sixty-to-eighty-per-cent chance, and brown an eighty-to-a-hundred-per-cent chance. In the first map, showing the GISS results, the Northeast was yellow, the Midwest was ochre, and the Rocky Mountain states and California were brown. In the second, showing the G.F.D.L. results, brown covered practically the entire country.

‘I gave a talk based on these drought indices out in California to water-resource managers,’ Rind told me. ‘And they said, ‘Well, if that happens, forget it.’ There’s just no way they could deal with that.’

He went on, ‘Obviously, if you get drought indices like these, there’s no adaptation that’s possible. But let’s say it’s not that severe. What adaptation are we talking about? Adaptation in 2020? Adaptation in 2040? Adaptation in 2060? Because the way the models project this, as global warming gets going, once you’ve adapted to one decade you’re going to have to change everything the next decade.’

And here’s how the anti-climate-change side is attempting to control US public opinion:

The pollster Frank Luntz prepared a strategy memo for Republican members of Congress, coaching them on how to deal with a variety of environmental issues. (Luntz, who first made a name for himself by helping to craft Newt Gingrich’s ‘Contract with America,’ has been described as ‘a political consultant viewed by Republicans as King Arthur viewed Merlin.’) Under the heading ‘Winning the Global Warming Debate,’ Luntz wrote, ‘The scientific debate is closing (against us) but not yet closed. There is still a window of opportunity to challenge the science.’ He warned, ‘Voters believe that there is no consensus about global warming in the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly.’

They’re a great synthesis. Go read the articles — part 1 (‘Disappearing islands, thawing permafrost, melting polar ice. How the earth is changing’), part 2 (‘The curse of Akkad’), and part 3 (‘What can be done?’). They’re long, but if you’re still on the fence about this one, they’ll wake you up.

Bayesian learning animation

Spam: via John Graham-Cumming’s excellent anti-spam newsletter this month comes a very cool animation of the dbacl Bayesian anti-spam filter being trained to classify a mail corpus.

And here’s the explanation from dbacl’s author, Laird Breyer:

dbacl computes two scores for each document, a ham score and a spam score. Technically, each score is a kind of distance, and the best category for a document is the lowest scoring one. One way to define the spamminess is to take the numerical difference of these scores.

Each point in the picture is one document, with the ham score on the x-axis and the spam score on the y-axis. If a point falls on the diagonal y=x, then its scores are identical and both categories are equally likely. If the point is below the diagonal, then the classifier must mark it as spam, and above the diagonal it marks it as ham.

The points are colour-coded. When a document is learned, we draw a square (blue for ham, red for spam). The picture shows the current scores of both the training documents and the as-yet-unknown documents in the SA corpus. The unknown documents are either cyan (we know it’s ham but the classifier doesn’t), magenta (spam), or black. Black means that, at the current state of learning, the document would be misclassified, because it falls on the wrong side of the diagonal. We don’t distinguish the types of errors. Only we know a point is black; the classifier doesn’t.

At time zero, when nothing has been learned, all the points are on the diagonal, because the two categories are symmetric.

Over time, the points move because the classifier’s probabilities change a little every time training occurs, and the clouds of points give an overall picture of what dbacl thinks of the unknown points. Of course, the more documents are learned, the fewer unknown points are left.
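In code terms, the decision rule Laird describes (two scores, each a distance, lowest wins, spamminess as their difference) boils down to something like the following toy sketch. This is not dbacl’s actual implementation, and the example scores are made-up numbers:

    # Toy illustration of the scoring geometry described above: each
    # score is a distance from a category, and the lower one wins.
    sub classify {
        my ($ham_score, $spam_score) = @_;
        # Positive spamminess means the point lies below the y=x
        # diagonal (spam distance smaller than ham distance).
        my $spamminess = $ham_score - $spam_score;
        return $spamminess > 0 ? 'spam' : 'ham';
    }

    print classify(143.2, 127.9), "\n";   # 'spam': the spam score is lower
    print classify(101.5, 130.8), "\n";   # 'ham':  the ham score is lower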

This is an excellent visualisation of the process, and demonstrates nicely what happens when you train a Bayesian spam-filter. You can clearly see the ‘unsure’ classifications becoming more reliable as the training corpus size increases. Very nice work!

It’s interesting to note the effects of an unbalanced corpus early on; a lot of spam training and little ham training results in a noticeable bias towards spam classifications.

Flickr as a ‘TypePad service for groups’

Web: a while back, I posted some musings about a web service that could authenticate users as members of a private group, much as TypeKey authenticates users in general.

Well, Flickr have just posted this draft authentication API, which does this very nicely — it allows third-party web apps to authenticate against Flickr, TypeKey-style, and perform a limited subset of actions on the user’s behalf.

This means that using Flickr as a group authentication web service is now doable, as far as I can see…
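If I’m reading the draft correctly, the mechanics are simple: a third-party app holds an API key and a shared secret, signs its request parameters with an MD5 digest, and sends the user to a Flickr URL to grant permissions. Here’s a rough Perl sketch of building that login URL; the endpoint and parameter names reflect my reading of the draft, and may well change before the API is finalised:

    # Sketch of the request-signing scheme, as I read the draft API.
    # The key, secret, endpoint and parameter names are assumptions.
    use strict;
    use Digest::MD5 qw(md5_hex);
    use URI::Escape qw(uri_escape);

    my $api_key = 'YOUR_API_KEY';        # placeholder, issued by Flickr
    my $secret  = 'YOUR_SHARED_SECRET';  # placeholder, issued by Flickr

    # The signature is the MD5 of the shared secret followed by the
    # name/value pairs, sorted by name and concatenated.
    sub sign_args {
        my (%args) = @_;
        my $plaintext = $secret;
        $plaintext .= $_ . $args{$_} for sort keys %args;
        return md5_hex($plaintext);
    }

    # Build the URL the user visits to grant the app 'read' access.
    my %args = (api_key => $api_key, perms => 'read');
    my $sig  = sign_args(%args);
    my $url  = 'http://flickr.com/services/auth/?'
             . join('&', map { uri_escape($_) . '=' . uri_escape($args{$_}) }
                         sort keys %args)
             . '&api_sig=' . $sig;
    print "$url\n";

Once the user approves, the app gets back a token it can use to establish who the user is; checking that identity against a private group’s membership list is then the app’s own problem, which is exactly the piece those earlier musings called for.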