Centralized email reputation services that rely on a small number of trusted nodes to detect and report spammers (e.g., SpamHaus) are being challenged by the increasing scale and sophistication of botnets. In particular, spammers employ multiple malicious hosts, each for only a short period of time, and each host in turn spams multiple domains for short periods. These strategies reduce the effectiveness of spam detection from a small number of vantage points. Moreover, several of these services require a paid subscription (e.g., CloudMark and TrustedSource).
Motivated by the shortcomings in terms of effectiveness and cost of the above email reputation services, researchers have proposed open and collaborative peer-to-peer spam filtering platforms, e.g., ALPACAS. These collaborative systems assume compliant behavior from all participating spam reporting nodes, i.e., that nodes submit truthful reports regarding spammers. However, this is often an unrealistic assumption given that these nodes may belong to distinct trust domains.
RepuScore, a recent collaborative spam email sender detection system, employs trust inference to weigh spammer reports according to the trustworthiness of their reporters. However, it remains susceptible to Sybil attacks.
To this end, we propose SocialFilter, a trust-aware collaborative spam mitigation system. SocialFilter enables nodes with no email classification functionality to query the network on whether a host is a spammer. It employs Sybil-resilient trust inference to weigh the reports that collaborating spam-detecting nodes (reporters) submit concerning spamming hosts; weighting each report by the trustworthiness of its reporter yields a measure of the system's belief that a host is a spammer. SocialFilter is the first collaborative unwanted traffic mitigation system that assesses the trustworthiness of spam reporters both by auditing their reports and by leveraging the social network of the reporters' administrators.
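As an illustration of the report-weighting idea, consider the minimal sketch below. The function name, the report representation, and the linear trust-weighted average are all assumptions for exposition; they are not SocialFilter's actual algorithm, which derives reporter trust through Sybil-resilient inference over the administrators' social network.

```python
# Hypothetical sketch: combine spam reports, weighted by reporter trust.
# The weighted-average aggregation rule is an illustrative assumption,
# not the paper's actual trust-inference computation.

def spammer_belief(reports, trust):
    """Estimate the system's belief that a host is a spammer.

    reports: {reporter_id: score in [0, 1]}, where 1 means
             "this host is a spammer".
    trust:   {reporter_id: trust value in [0, 1]} produced by a
             Sybil-resilient trust inference mechanism.
    Returns a belief value in [0, 1].
    """
    total_trust = sum(trust.get(r, 0.0) for r in reports)
    if total_trust == 0.0:
        return 0.0  # no trusted evidence about this host
    weighted = sum(trust.get(r, 0.0) * score for r, score in reports.items())
    return weighted / total_trust

# Two trusted reporters flag the host; a Sybil identity with zero
# trust denies it, but its report carries no weight.
belief = spammer_belief(
    {"alice": 1.0, "bob": 1.0, "sybil": 0.0},
    {"alice": 0.9, "bob": 0.8, "sybil": 0.0},
)
```

Because Sybil identities receive near-zero trust, flooding the system with false reports has little effect on the aggregate belief, which is the intuition behind the Sybil resilience claimed above.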
The design and evaluation of SocialFilter offer the following lessons: a) it is plausible to introduce Sybil-resilient, Online-Social-Network-based trust inference mechanisms to improve the reliability and attack-resistance of collaborative spam mitigation; b) using social links to assess the trustworthiness of reports concerning spammers can achieve spam-blocking effectiveness comparable to approaches that use social links to rate-limit spam (e.g., Ostra); c) unlike Ostra, in the absence of reports that incriminate benign email senders, SocialFilter yields no false positives.