Spam issues and how to overcome them
Homer Dokes
hdokes at mail.inct.net
Sat Jun 11 16:46:03 CEST 2016
Greetings all,
So, after having run two Kolab servers for over a year now, spam is
still a huge problem.
I have found it very difficult to understand how Kolab employs these
tools to combat spam on the server, and I can find nothing but
generalities when it comes to configuring a sound anti-spam regimen. I
can find some actual configurations for versions earlier than Kolab 3.4,
but it is obvious they don't apply to 3.4 due to changes in naming
conventions, file locations, etc., so while they give 'some' idea of how
to configure things, it's a guessing game as to what applies to Kolab 3.4
and how.
Allow me to review my experiences thus far and some actual issues and
results.
I have two servers running Kolab. One is in a worldwide retail
environment, the other in a localized service environment.
Current conditions:
Debian 7.0 (Wheezy)
Kolab 3.4 with the latest updates as of 6/11/2016
amavisd-new
SpamAssassin
Razor
Pyzor
ClamAV
Sieve
Utilization of spam blocklists
I have employed most of the tactics described in this document
https://lists.kolab.org/pipermail/users/2015-September/019923.html but
still have insurmountable amounts of spam making it through the system.
The two servers have been in place and fully functional for over a
year. The spam configurations have been running with the latest
definitions and settings for over 4 weeks.
I have employed Bayes rules, downloaded pre-built definitions for them,
and continue to run sa-learn daily against 150+ mailboxes to 'learn'
what is spam from the users' Junk folders, but it has made absolutely no
difference. The same emails keep coming through, and the spam scoring is
all over the map; there is no consistency to it at all.
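For reference, the daily training run looks roughly like this; the spool
path, folder name, and the note about the running user reflect my own
Debian layout, so treat it as a sketch rather than anything
authoritative:

    #!/bin/sh
    # Nightly Bayes training against every user's Junk folder on the Cyrus spool.
    # This has to run as whatever user owns the Bayes DB that amavis consults
    # (the 'amavis' user on my Debian boxes), or the training goes nowhere useful.
    SPOOL=/var/spool/cyrus/mail
    find "$SPOOL" -type d -name Junk | while read -r junk; do
        # Cyrus stores one message per numbered file; skip its cyrus.* metadata
        find "$junk" -maxdepth 1 -type f -name '[0-9]*.' \
            -exec sa-learn --spam {} + >/dev/null
    done

If there is a more 'Kolab-blessed' way to feed the users' Junk folders
back into Bayes, I would love to hear it.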
Here is the header of an example spam that comes through many times a
day, has hundreds of entries in users' Junk folders, and yet continues
to enjoy a spam score of 1.342... far below the recommended threshold of
6.31 (the initial default in the configuration), and certainly well
below the 3.0 I set while trying to get closer to the scores these spam
emails are actually receiving:
Return-Path:
<2472-838548814-88-recipient=yadayada.com at mail.elementdooraim.com>
Received: from mail.yadayada.com ([unix socket])
by mail (Cyrus git2.5+0-Debian-2.5~dev2015021301-0~kolab1) with LMTPA;
Sat, 11 Jun 2016 08:46:54 -0400
X-Sieve: CMU Sieve 2.4
X-Virus-Scanned: Debian amavisd-new at yadayada.com
X-Spam-Flag: NO
X-Spam-Score: 1.342
X-Spam-Level: *
X-Spam-Status: No, score=1.342 tagged_above=-10 required=3
tests=[BAYES_00=-1.9, DKIM_SIGNED=0.1, DKIM_VALID=-0.1,
DKIM_VALID_AU=-0.1, HTML_IMAGE_ONLY_16=1.092, HTML_MESSAGE=0.001,
HTML_SHORT_LINK_IMG_2=0.001, MPART_ALT_DIFF=0.79,
RCVD_IN_BRBL_LASTEXT=1.449, SPF_PASS=-0.001, T_REMOTE_IMAGE=0.01]
autolearn=no
Received: from maria.elementdooraim.com (64-16-218-71.static.sagonet.net
[64.16.218.71])
by mail.yadayada.com (Postfix) with ESMTP id 8B8EF53C8
for <recipient at yadayada.com>; Sat, 11 Jun 2016 08:46:50 -0400 (EDT)
DKIM-Signature: v=1; a=rsa-sha1; c=relaxed/relaxed; s=k1;
d=elementdooraim.com;
h=Mime-Version:Content-Type:Date:From:Reply-To:Subject:To:Message-ID;
i=info at elementdooraim.com; bh=Y/a1tdkArMQ8RCID0h3i1qWZh7k=;
b=QcQOWDYWhfBwK0oWa4dx1Q5kzLf9CATzFNWO4T5rk1cRPWC3UkqZb3eeQKkN+fOx+J7WrG4YrX4d
e0Lb83zfjy9ppabQL9c3Xq1TX7EURamDq2vQDgW1wlBu1XNsh9xMjXj/9MLVZ5lzqrT04i5XiAcM
aX5d/tFQyXonE9SZPPQ=
DomainKey-Signature: a=rsa-sha1; c=nofws; q=dns; s=k1; d=elementdooraim.com;
b=Tn1vY7j32iXCGJRBVwMVwf3cOhFw8Zi8UsrG/mJ2fEhPVotOCQFSQJVnoxEqG26G6Io9zebXzw1y
sOeFozxSf6+bmvOpMXdyYI4TSNxudp5PnKeLquFIVEh8WfvHvON8b3Hc5ZwW4cgDptLM4z1yv9NV
n66xK1DMjzeO58bQ00c=;
Mime-Version: 1.0
Content-Type: multipart/alternative;
boundary="18112c6dd97e31c483b0c78bfc6a8313"
Date: Sat, 11 Jun 2016 05:42:13 -0700
From: "x-700 Pocket Flashlight" <info at elementdooraim.com>
Reply-To: "x700 Pocket Flashlight" <info at elementdooraim.com>
Subject: DEADLY Pocket Flashlight (A Must Have)!
To: <recipient at yadayada.com>
Message-ID: <0.0.838548814.teuwyd31fb3d4ecjsafp461081.0 at elementdooraim.com>
X-Wallace-Footer: YES
One would have thought that the range of the spam scores would start at
zero and move in a positive direction; however, I have actually seen
spam scores with negative values. What IS the range of the score? What
are its lowest and highest points, and how is it calculated?
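As best I can tell from the header above, the reported score is simply
the sum of the individual test scores, which would also explain how
negative totals are possible:

    -1.900  BAYES_00
    +0.100  DKIM_SIGNED
    -0.100  DKIM_VALID
    -0.100  DKIM_VALID_AU
    +1.092  HTML_IMAGE_ONLY_16
    +0.001  HTML_MESSAGE
    +0.001  HTML_SHORT_LINK_IMG_2
    +0.790  MPART_ALT_DIFF
    +1.449  RCVD_IN_BRBL_LASTEXT
    -0.001  SPF_PASS
    +0.010  T_REMOTE_IMAGE
    ======
     1.342  reported as X-Spam-Score

So there appears to be no fixed floor or ceiling; the total is bounded
only by whichever tests happen to fire. What really bothers me is
BAYES_00 pulling the score down by 1.9 on a message that sits in
hundreds of Junk folders, which makes me wonder whether my sa-learn
training is even reaching the Bayes database amavis consults.
Confirmation either way would be appreciated.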
I have also recognized that most of the spam comes in through a previous
FQDN. While it hasn't been actively used for years, we still receive
valid email at those addresses, so the old domain is configured as a
secondary domain on every user's mailbox. As such, I set up Sieve rules
to push all email addressed to that domain into its own folder for each
user, only to realize that they move only about 50% of the messages
addressed to that domain into the folder; the other 50% still end up in
the main inbox. How is this possible? The Sieve rule is based ONLY on
the 'To:' address, and that field contains only the user's address at
the old domain. How does it work 50% of the time and not the other 50%?
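For reference, the rule in question is essentially the following; the
old domain and folder name are placeholders for my real ones:

    require ["fileinto"];
    # anything whose To: header carries the old domain goes to its own folder
    if address :domain :is "to" "old-domain.example" {
        fileinto "Old-Domain";
        stop;
    }

One theory I would like confirmed or shot down: if the misses are
messages where the old-domain address appears only in the envelope
recipient (Bcc, 'undisclosed recipients', and the like), an address test
against the To: header would never see it, and an envelope test
(require "envelope"; then envelope :domain :is "to" ...) would be needed
instead, assuming the installed Sieve supports that extension.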
I have a tremendous number of pissed-off users because they spend more
time sifting spam than addressing legitimate email. I'd be better off
defining go/no-go folders, where an email dropped into the 'no go'
folder, for example, is blacklisted and never allowed through again, but
I can find no Kolab-specific information on how to accomplish this. Is
Kolab capable of giving each user a blacklist and whitelist through
Roundcube? If so, can someone point me to a tutorial or an example
configuration?
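To make concrete what I am after, the kind of rule I would want
generated automatically per user looks something like this (the second
sender address is invented for illustration):

    # senders whose mail the user has dropped into the 'no go' folder
    # are never allowed through again
    if address :is "from" ["info@elementdooraim.com", "spammer@example.net"] {
        discard;
        stop;
    }

I understand Roundcube ships a managesieve-based filters plugin that can
express per-user rules like this, but I have found no Kolab 3.4
documentation on wiring it up as the kind of black/white list I
describe.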
Can an administrator of Kolab rely on the individual packages' own
documentation for configuration, or does the way they 'fit' into
Kolab 3.4 make those configurations meaningless? For example, I
understand that running spamd is NOT what you want to do in Kolab 3.4,
because amavisd-new loads the SpamAssassin libraries itself, calls
SpamAssassin's features internally, and does not talk to spamd at all.
That alone seems to throw the individual packages' documentation out the
window, since we are not starting from the same base.
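For what it's worth, the only place the thresholds visible in the header
above (tagged_above=-10, required=3) seem to be controlled on my systems
is the amavisd-new configuration rather than SpamAssassin's local.cf. On
Debian that is roughly the following; the file path and values are from
my own setup, so take them as illustrative:

    # /etc/amavis/conf.d/50-user -- spam thresholds used when amavis calls SA
    $sa_tag_level_deflt  = -10;    # add X-Spam-Score/Status headers at/above this
    $sa_tag2_level_deflt = 3.0;    # set X-Spam-Flag: YES at/above this ('required')
    $sa_kill_level_deflt = 6.31;   # take 'evasive action' at/above this
    $final_spam_destiny  = D_PASS; # fate of mail past the kill level
    1;  # a Debian conf.d fragment must return a true value

If that is correct, then required_score in /etc/spamassassin/local.cf is
effectively ignored in favor of these values, which is exactly the kind
of Kolab-specific detail I cannot find written down anywhere.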
I owned and ran an ISP for 15 years, dissolved it 18 months ago, and
have used a wide variety of email server platforms. After the ISP, I
decided to take the plunge into Kolab, but having administered it over
the last year, I have really come to question its viability as a sound
and easily maintained email platform. Quite the contrary, I have found
it to demand more of my time than any other platform I have used.
Should it be this way? Am I overlooking something? In the end, it is
really the lack of consistent and applicable documentation for the Kolab
environment that has made the experience so exasperating. I am certain
that the package overall can be, and probably is, a sound one, but if
one cannot find documentation that speaks to the uniqueness that is
Kolab, how does one come out of it with a positive take?
In the end, what I am looking for is how Kolab 'alters' the anti-spam
tools (amavisd-new, SpamAssassin, Razor, Pyzor, etc.), from a wrapper
and configuration standpoint, relative to their respective standalone
configurations. Is there a Kolab-version-specific reference for a
functional spam configuration? I am continually surprised at what
appears to be a tremendously inadequate repository of information for
Kolab (specifically 3.4) given the number of users the platform has out
there. I know I can't be the only one experiencing these issues; or is
it that I just haven't found the 'holy grail' repository of Kolab 3.4
information?
I would appreciate any assistance I can get with this. I am too far
invested in the Kolab platform at this point to drop it and move to
something else.
Thank you,
hdokes