SpamHash/NaiveHamTest



Testing SpamHash on a ham stream, naive start.

Why?

I expect ham to be, by its very nature, unique and thus immune to the similarity testing that spamsum performs. If this proves to be the case, then spamhash could be run natively on an incoming mail stream, without requiring a spampot.

Original notes

  • Is a spampot even necessary? Couldn't this simply be run on a complete email dataset? After all, it works by allowing through the first instance of every unique email anyway, and ham tends to be relatively unique, whilst spam tends to come in repetitive sets...
    • Yes... in simple testing, simply quoting an email in a response makes it quite dissimilar, and the reply to that (which should be the next message spamsum sees) will have two levels of quoting! (TODO: get numbers; a toy illustration follows this list)
    • TODO: test simply by feeding a week's corpus of ALL my regular email through spamsum, simulating this.
      • Do this twice: once naively, once with a pre-learnt hashDB from the spampot.
      • Then do it another way: over a known 100% ham corpus (save a corpus of ham messages to MH or maildir format).
      • Expectation: this will be effective, except possibly for email memes. (If the same funny picture is sent to you twice, even by different people, it will be base64-encoded identically and thus show up as EXTREMELY similar; how common this is should show up in the 100% ham corpus test.)
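
Below is the toy illustration referred to above. It is not spamsum itself: Python's difflib.SequenceMatcher stands in for a similarity score (spamsum proper uses a context-triggered piecewise hash), and the messages are invented for the example.

  # Quoting an original message inside a reply still leaves the two texts
  # noticeably different under a naive similarity measure, while the same
  # attachment bytes always base64-encode to exactly the same text.
  import base64
  import difflib

  original = "Hi,\n\nAre we still on for lunch on Friday?\n\ncheers,\nA\n"
  reply = ("Sure, see you at noon.\n\n"
           + "".join("> " + line + "\n" for line in original.splitlines())
           + "\nB\n")

  # Stand-in similarity score; prints a value well short of 1.0.
  score = difflib.SequenceMatcher(None, original, reply).ratio()
  print("original vs quoted reply: %.2f" % score)

  # An email "meme" (the same funny picture sent by two different people)
  # produces identical base64 text, so it would look extremely similar.
  picture = b"pretend these are the bytes of the same funny picture"
  print(base64.b64encode(picture) == base64.b64encode(picture))  # True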

Ham Corpus

The ham corpus is my personal email archive.

I combined most mailfolders I currently have, the only notable exception being the archive from an nntp->smtp gateway, as this cannot be considered representative of genuine email (the nntp archive alone represented some 50,000 messages!). All other mail (personal, email lists, etc.) was added in, then sent mail was filtered out (via mutt's $alternates). Some obvious spam which had crept in (mainly via one buggy list) was also removed, but this cannot be said to be comprehensive. The resulting archive was then cropped to the 11-year window of 1998 to 2008 inclusive.

This filtering reduced my personal mail archive from approximately 120,000 messages to 61,277 (637MB in mbox format).

Finally, the mail was saved to Maildir format with modified filenames. The format used is: "YYYYMMDD-HH:MM:SS.<string>.8charmd5:2,". This allows for simple commandline listing of messages in chronological order. We set <string> to name the corpus, allowing for future mixing of ham, spam and spampot corpora without losing each message's original attribution.
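
As a minimal sketch (not the original conversion script), a filename in that form could be derived from each message roughly as follows; the function name, the default corpus tag and the choice to hash the raw message bytes are assumptions made for illustration, and a well-formed Date: header is assumed.

  import hashlib
  from email import message_from_bytes
  from email.utils import parsedate_to_datetime

  def corpus_filename(raw_message: bytes, corpus_tag: str = "hamcorpus") -> str:
      # Hypothetical helper: build a "YYYYMMDD-HH:MM:SS.<string>.8charmd5:2," name.
      msg = message_from_bytes(raw_message)
      when = parsedate_to_datetime(msg["Date"])          # chronological sort key
      stamp = when.strftime("%Y%m%d-%H:%M:%S")
      digest = hashlib.md5(raw_message).hexdigest()[:8]  # 8-char md5 for uniqueness
      return "%s.%s.%s:2," % (stamp, corpus_tag, digest) # ":2," = empty maildir flag list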

Test procedure

The ham corpus was run through a script to simulate chronological delivery and filtering via procmail (a rough sketch of this loop follows the list below).

  • This was run several times:
    • each test was run twice: once with and once without spamsum's "-H" option (ignore email headers)
    • at threshold scores of 25, 50 and 75 (and possibly others, depending on the results seen here and in the equivalent spam corpus testing)
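
Here is the rough sketch referred to above. The helper looks_like_known_mail() is a hypothetical stand-in for the real spamsum/procmail comparison (run with -Txx and optionally -H); the exact invocation is not recorded on this page, so none is assumed here.

  import os

  def looks_like_known_mail(raw: bytes, threshold: int, ignore_headers: bool) -> bool:
      # Hypothetical stand-in: hash the message and compare it against the
      # accumulated hash bucket at the given spamsum threshold.
      raise NotImplementedError

  def run_naive_ham_test(maildir_cur: str, threshold: int, ignore_headers: bool):
      hashes = 0   # messages that looked new and were added to the bucket
      catches = 0  # messages judged similar to an earlier one (false positives on ham)
      for name in sorted(os.listdir(maildir_cur)):   # filenames sort chronologically
          with open(os.path.join(maildir_cur, name), "rb") as fh:
              raw = fh.read()
          if looks_like_known_mail(raw, threshold, ignore_headers):
              catches += 1
          else:
              hashes += 1
      return hashes, catches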

Test machine

Hardware

  • 933MHz PIII
  • 2x 6gig QUANTUM FIREBALL
  • 382meg RAM

Software

Note that the first HD has WindowsXP installed; the Linux install below is on the second drive.

  • Debian GNU/Linux 4.0
  • Kernel: 2.6.18-4-686
  • 900meg SWAP partition

Results

Spamsum threshold (-Txx), tested both including headers and excluding headers (-H):

-T25, including headers:
/usr/bin/time: 18878.18user 1146.58system, 6:56:08elapsed 80%CPU, (0avgtext+0avgdata 0maxresident)k, 0inputs+0outputs, (63875major+75779350minor)pagefaults, 0swaps
  • hashes: 30181
  • catches: 31096
One word: abysmal

-T25, excluding headers (-H): to be tested

-T50, including headers:
/usr/bin/time: 44472.83user 1264.18system, 15:32:24elapsed 4%CPU, (0avgtext+0avgdata 0maxresident)k, 0inputs+0outputs, (64143major+75657421minor)pagefaults, 0swaps
  • hashes: 44070
  • catches: 17207
One word result: terrible

-T50, excluding headers (-H): to be tested

-T75, including headers:
/usr/bin/time: 85659.36user 1381.94system, 27:06:11elapsed 1%CPU, (0avgtext+0avgdata 0maxresident)k, 0inputs+0outputs, (63674major+75251824minor)pagefaults, 0swaps
  • hashes: 60080
  • catches: 1197
Oneline summary: 2% false positives = not acceptable

-T75, excluding headers (-H):
/usr/bin/time: 36731.38user 1341.55system, 13:23:14elapsed 78%CPU, (0avgtext+0avgdata 0maxresident)k, 0inputs+0outputs, (65056major+76551410minor)pagefaults, 0swaps
  • hashes: 52189
  • catches: 9088
Oneline summary: 14% = always use header protection

-T90, including headers: to be tested

-T90, excluding headers (-H): to be tested

Raw result data available on request
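
The percentage summaries in the table appear to be the catches taken as a fraction of the 61,277-message corpus (hashes plus catches add up to 61,277 in every completed run). A quick check of that reading, using only the numbers from the table:

  # hashes/catches per run, copied from the results table above.
  runs = [("-T25", 30181, 31096), ("-T50", 44070, 17207),
          ("-T75", 60080, 1197), ("-T75 -H", 52189, 9088)]
  for label, hashes, catches in runs:
      total = hashes + catches               # 61277 for every completed run
      rate = 100.0 * catches / total         # fraction of ham wrongly caught
      print("%-8s %5d/%d = %4.1f%%" % (label, catches, total, rate))
  # Prints roughly 50.7%, 28.1%, 2.0% and 14.8% respectively.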

Analysis

  • Higher threshold runs take longer to process. This is because the higher the threshold, the more mail is saved to the hashes bucket. Since every mail is hashed and compared against the hashes bucket, this comparison time increases with the threshold. (The parent script in turn spends relatively less time on itself, so its %CPU falls as the threshold increases.) A rough estimate of this effect follows the list.
  • -T25 and -T50 are terrible. Even -T75 is not acceptable when run over a ham corpus. This increasingly indicates that a spampot is required for an effective SpamHash system.
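
A back-of-the-envelope estimate of the effect described in the first point above, assuming (purely for illustration) that the hash bucket grows roughly linearly over the run towards its final size:

  # If each of the 61,277 messages is compared against the bucket built up
  # so far, the total work is about the sum of bucket sizes over the run,
  # i.e. roughly final_bucket * messages / 2 under the linear-growth assumption.
  messages = 61277
  for label, final_bucket in [("-T25", 30181), ("-T50", 44070), ("-T75", 60080)]:
      print("%s: ~%.1f billion comparisons" % (label, final_bucket * messages / 2 / 1e9))
  # This counts comparisons only, not the cost of each one, so it indicates
  # the trend rather than the exact runtime ratio.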