What Is SPAM And How Not To Be A SPAMMER

Email spam, also known as junk email or unsolicited bulk email (UBE), is a subset of electronic spam involving nearly identical messages sent to numerous recipients by email. The messages may contain disguised links that appear to be for familiar websites but in fact lead to phishing web sites or sites that are hosting malware. Spam email may also include malware as scripts or other executable file attachments. Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. One subset of UBE is UCE (unsolicited commercial email). The opposite of “spam”, email which one wants, is sometimes called “ham”. Like other forms of unwanted bulk messaging, it is named for Spam luncheon meat by way of a Monty Python sketch in which Spam is depicted as ubiquitous and unavoidable.

Email spam has steadily grown since the early 1990s. Botnets, networks of virus-infected computers, are used to send about 80% of spam. Since the expense of the spam is borne mostly by the recipient, it is effectively postage-due advertising.

The legal status of spam varies from one jurisdiction to another. In the United States, spam was declared to be legal by the CAN-SPAM Act of 2003 provided the message adheres to certain specifications. ISPs have attempted to recover the cost of spam through lawsuits against spammers, although they have been mostly unsuccessful in collecting damages despite winning in court.

Spammers collect email addresses from chatrooms, websites, customer lists, and newsgroups, and from viruses that harvest users’ address books; these harvested lists are also sold to other spammers. They also use a practice known as “email appending” or “epending”, in which they use known information about their target (such as a postal address) to search for the target’s email address. Much spam is sent to invalid email addresses. According to the Message Anti-Abuse Working Group, spam accounted for 88–92% of email messages sent in the first half of 2010.

Overview

From the beginning of the Internet (the ARPANET), sending junk email has been prohibited. Gary Thuerk sent the first email spam message in 1978 to 600 people; he was reprimanded and told not to do it again. The ban on spam is enforced by the Terms of Service/Acceptable Use Policy (ToS/AUP) of internet service providers (ISPs) and by peer pressure. Even with a thousand users, junk email for advertising is not tenable, and with a million users it is not only impractical but also expensive. It was estimated that spam cost businesses on the order of $100 billion in 2007. As the scale of the spam problem has grown, ISPs and the public have turned to government for relief, which has failed to materialize.

Types

Spam has several definitions varying by source.

  • Unsolicited bulk email (UBE)—unsolicited email, sent in large quantities.
  • Unsolicited commercial email (UCE)—this more restrictive definition is used by regulators whose mandate is to regulate commerce, such as the U.S. Federal Trade Commission.

Spamvertised sites

Many spam emails contain URLs to a website or websites. According to a 2014 Cyberoam report, an average of 54 billion spam messages are sent every day. “Pharmaceutical products (Viagra and the like) jumped up 45% from last quarter’s analysis, leading this quarter’s spam pack. Emails purporting to offer jobs with fast, easy cash come in at number two, accounting for approximately 15% of all spam email. And, rounding off at number three are spam emails about diet products (such as Garcinia gummi-gutta or Garcinia Cambogia), accounting for approximately 1%.”

419 scams

Advance fee fraud spam such as the Nigerian “419” scam may be sent by a single individual from a cybercafé in a developing country. Organized “spam gangs” operate from sites set up by the Russian mafia, with turf battles and revenge killings sometimes resulting.

Phishing

Spam is also a medium for fraudsters to scam users into entering personal information on fake Web sites using emails forged to look like they are from banks or other organizations, such as PayPal. This is known as phishing. Targeted phishing, where known information about the recipient is used to create forged emails, is known as spear-phishing.

Spam techniques

Appending

If a marketer has one database containing names, addresses, and telephone numbers of customers, they can pay to have their database matched against an external database containing email addresses. The company then has the means to send email to people who have not requested email, which may include people who have deliberately withheld their email address.

Image spam

Image spam, or image-based spam, is an obfuscating method in which the text of the message is stored as a GIF or JPEG image and displayed in the email. This prevents text-based spam filters from detecting and blocking spam messages. Image spam was reportedly used in the mid-2000s to advertise “pump and dump” stocks.

Often, image spam contains nonsensical, computer-generated text which simply annoys the reader. However, new technology in some programs tries to read the images by attempting to find text in these images. These programs are not very accurate, and sometimes filter out innocent images of products, such as a box that has words on it.

A newer technique, however, is to use an animated GIF image that does not contain clear text in its initial frame, or to contort the shapes of letters in the image (as in CAPTCHA) to avoid detection by optical character recognition tools.

Blank spam

Blank spam is spam lacking a payload advertisement. Often the message body is missing altogether, as well as the subject line. Still, it fits the definition of spam because of its nature as bulk and unsolicited email.

Blank spam may originate in different ways, either intentionally or unintentionally:

  1. Blank spam can have been sent in a directory harvest attack, a form of dictionary attack for gathering valid addresses from an email service provider. Since the goal in such an attack is to use the bounces to separate invalid addresses from the valid ones, spammers may dispense with most elements of the header and the entire message body, and still accomplish their goals.
  2. Blank spam may also occur when a spammer forgets or otherwise fails to add the payload when he or she sets up the spam run.
  3. Often blank spam headers appear truncated, suggesting that computer glitches may have contributed to this problem—from poorly written spam software to malfunctioning relay servers, or any problems that may truncate header lines from the message body.
  4. Some spam may appear to be blank when in fact it is not. An example of this is the VBS.Davinia.B email worm which propagates through messages that have no subject line and appears blank, when in fact it uses HTML code to download other files.

Backscatter spam

Backscatter is a side-effect of email spam, viruses and worms, where email servers receiving spam and other mail send bounce messages to an innocent party. This occurs because the original message’s envelope sender is forged to contain the email address of the victim. A very large proportion of such email is sent with a forged From: header, matching the envelope sender.

Since these messages were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities, they qualify as unsolicited bulk email or spam. As such, systems that generate email backscatter can end up being listed on various DNSBLs and be in violation of internet service providers’ Terms of Service.

Legality

Sending spam violates the acceptable use policy (AUP) of almost all Internet service providers. Providers vary in their willingness or ability to enforce their AUPs. Some actively enforce their terms and terminate spammers’ accounts without warning. Some ISPs lack adequate personnel or technical skills for enforcement, while others may be reluctant to enforce restrictive terms against profitable customers.

As the recipient directly bears the cost of delivery, storage, and processing, one could regard spam as the electronic equivalent of “postage-due” junk mail. Due to the low cost of sending unsolicited email and the potential profit entailed, some believe that only strict legal enforcement can stop junk email. The Coalition Against Unsolicited Commercial Email (CAUCE) argues “Today, much of the spam volume is sent by career criminals and malicious hackers who won’t stop until they’re all rounded up and put in jail.”

European Union

All the countries of the European Union have passed laws that specifically target spam.

Article 13 of the European Union Directive on Privacy and Electronic Communications (2002/58/EC) provides that the EU member states shall take appropriate measures to ensure that unsolicited communications for the purposes of direct marketing are not allowed either without the consent of the subscribers concerned or in respect of subscribers who do not wish to receive these communications, the choice between these options to be determined by national legislation.

In the United Kingdom, for example, unsolicited emails cannot be sent to an individual subscriber unless prior permission has been obtained or unless there is a pre-existing relationship between the parties. The regulations can be enforced against an offending company or individual anywhere in the European Union. The Information Commissioner’s Office is responsible for enforcing the rules on unsolicited emails and considers complaints about breaches. A breach of an enforcement notice is a criminal offense subject to a fine of up to £500,000.

Canada

The Government of Canada has passed anti-spam legislation called the Fighting Internet and Wireless Spam Act to fight spam.

Australia

In Australia, the relevant legislation is the Spam Act 2003, which covers some types of email and phone spam and took effect on 11 April 2004. The Spam Act provides that “Unsolicited commercial electronic messages must not be sent.” Whether an email is unsolicited depends on whether the sender has consent. Consent can be express or inferred. Express consent is when someone directly instructs a sender to send them emails, e.g. by opting in. Consent can also be inferred from the business relationship between the sender and recipient or if the recipient conspicuously publishes their email address in a public place (such as on a website). Penalties are up to 10,000 penalty units, or 2,000 penalty units for a person other than a body corporate.

United States

In the United States, most states enacted anti-spam laws during the late 1990s and early 2000s. Many of these have since been pre-empted by the less restrictive CAN-SPAM Act of 2003 (“CAN-SPAM“).

Spam is legally permissible according to CAN-SPAM, provided it meets certain criteria: a “truthful” subject line, no forged information in the technical headers or sender address, and other minor requirements. If the spam fails to comply with any of these requirements it is illegal. Aggravated or accelerated penalties apply if the spammer harvested the email addresses using methods described earlier.

A review of the effectiveness of CAN-SPAM in 2005 by the Federal Trade Commission (the agency charged with CAN-SPAM enforcement) stated that the amount of sexually explicit spam had significantly decreased since 2003 and the total volume had begun to level off. Senator Conrad Burns, a principal sponsor, noted that “Enforcement is key regarding the CAN-SPAM legislation.” In 2004, less than one percent of spam complied with CAN-SPAM. In contrast to the FTC evaluation, many observers view CAN-SPAM as having failed in its purpose of reducing spam.

Other laws

Accessing privately owned computer resources without the owner’s permission is illegal under computer crime statutes in most nations. Deliberate spreading of computer viruses is also illegal in the United States and elsewhere. Thus, some common behaviors of spammers are criminal regardless of the legality of spamming per se. Even before the advent of laws specifically banning or regulating spamming, spammers were successfully prosecuted under computer fraud and abuse laws for wrongfully using others’ computers.

The use of botnets can be perceived as theft. The spammer consumes a zombie owner’s bandwidth and resources without any cost. In addition, spam is perceived as theft of services. The receiving SMTP servers consume significant amounts of system resources dealing with this unwanted traffic. As a result, service providers have to spend large amounts of money to make their systems capable of handling these amounts of email. Such costs are inevitably passed on to the service providers’ customers.

Other laws, not only those related to spam, have been used to prosecute alleged spammers. For example, Alan Ralsky was indicted on stock fraud charges in January 2008, and Robert Soloway pled guilty in March 2008 to charges of mail fraud, fraud in connection with email, and failing to file a tax return.

Deception and fraud

Spammers may engage in deliberate fraud to send out their messages. Spammers often use false names, addresses, phone numbers, and other contact information to set up “disposable” accounts at various Internet service providers. They also often use falsified or stolen credit card numbers to pay for these accounts. This allows them to move quickly from one account to the next as the host ISPs discover and shut down each one.

Senders may go to great lengths to conceal the origin of their messages. Large companies may hire another firm to send their messages so that complaints or blocking of email falls on a third party. Others engage in spoofing of email addresses (much easier than IP address spoofing). The email protocol (SMTP) has no authentication by default, so a spammer can make a message appear to come from any email address. To prevent this, some ISPs and domains require the use of SMTP-AUTH, allowing positive identification of the specific account from which an email originates.

Senders cannot completely spoof email delivery chains (the ‘Received’ header), since the receiving mailserver records the actual connection from the last mailserver’s IP address. To counter this, some spammers forge additional delivery headers to make it appear as if the email had previously traversed many legitimate servers.

Spoofing can have serious consequences for legitimate email users. Their inboxes can become clogged with “undeliverable” bounce messages in addition to volumes of spam, and they can mistakenly be identified as spammers. Not only may they receive irate email from spam victims, but (if spam victims report the email address owner to the ISP, for example) a naive ISP may terminate their service for spamming.

Theft of service

Spammers frequently seek out and make use of vulnerable third-party systems such as open mail relays and open proxy servers. SMTP forwards mail from one server to another—mail servers that ISPs run commonly require some form of authentication to ensure that the user is a customer of that ISP. Open relays, however, do not properly check who is using the mail server and pass all mail to the destination address, making it harder to track down spammers.

Increasingly, spammers use networks of malware-infected PCs (zombies) to send their spam. Zombie networks are also known as botnets (such zombifying malware is known as a bot, short for robot). In June 2006, an estimated 80 percent of email spam was sent by zombie PCs, an increase of 30 percent from the prior year. An estimated 55 billion spam emails were sent each day in June 2006, an increase of 25 billion per day from June 2005.

For the first quarter of 2010, an estimated 305,000 newly activated zombie PCs were brought online each day for malicious activity. This number is slightly lower than the 312,000 of the fourth quarter of 2009.

Brazil produced the most zombies in the first quarter of 2010, accounting for 20 percent of all zombies, compared with 14 percent in the fourth quarter of 2009. India had 10 percent, with Vietnam at 8 percent and the Russian Federation at 7 percent.

Side effects

To combat the problems posed by botnets, open relays, and proxy servers, many email server administrators pre-emptively block dynamic IP ranges and impose stringent requirements on other servers wishing to deliver mail. Forward-confirmed reverse DNS must be correctly set for the outgoing mail server and large swaths of IP addresses are blocked, sometimes pre-emptively, to prevent spam. These measures can pose problems for those wanting to run a small email server off an inexpensive domestic connection. Blacklisting of IP ranges due to spam emanating from them also causes problems for legitimate email servers in the same IP range.

Statistics and estimates

The total volume of email spam had been consistently growing, but in 2011 the trend seemed to reverse. The amount of spam users see in their mailboxes is only a portion of the total spam sent, since spammers’ lists often contain a large percentage of invalid addresses and many spam filters simply delete or reject “obvious spam.”

The first known spam email, advertising a DEC product presentation, was sent in 1978 by Gary Thuerk to 600 addresses, which was all the users of ARPANET at the time, though software limitations meant only slightly more than half of the intended recipients actually received it. As of August 2010, the amount of spam was estimated at around 200 billion messages sent per day. More than 97% of all emails sent over the Internet are unwanted, according to a Microsoft security report. MAAWG estimated that 85% of incoming mail was “abusive email” as of the second half of 2007; the sample size for the MAAWG study was over 100 million mailboxes.

A 2010 survey of US and European email users showed that 46% of the respondents had opened spam messages, although only 11% had clicked on a link.

Highest amount of spam received

According to Steve Ballmer, Microsoft founder Bill Gates receives four million emails per year, most of them spam. This was originally incorrectly reported as “per day”.

At the same time Jef Poskanzer, owner of the domain name acme.com, was receiving over one million spam emails per day.

Cost of spam

A 2004 survey estimated that lost productivity costs Internet users in the United States $21.58 billion annually, while another reported the cost at $17 billion, up from $11 billion in 2003. The worldwide productivity cost of spam for 2005 has been estimated at $50 billion. An estimate of the percentage of cost borne by the sender of marketing junk mail (snail mail) is 88 percent, whereas in 2001 one spam email was estimated to cost $0.10 for the receiver and $0.00001 (0.01% of that cost) for the sender.

Origin of spam

Email spam relayed by country in Q2/2007.

Origin or source of spam refers to the geographical location of the computer from which the spam is sent; it is not the country where the spammer resides, nor the country that hosts the spamvertised site. Because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. As much as 80% of spam received by Internet users in North America and Europe can be traced to fewer than 200 spammers.

In terms of number of IP addresses: the Spamhaus Project (which measures spam sources in terms of number of IP addresses used for spamming, rather than volume of spam sent) ranks the top three as the United States, China, and Russia, followed by Japan, Canada, and South Korea.

In terms of networks: As of 5 June 2007, the three networks hosting the most spammers are Verizon, AT&T, and VSNL International. Verizon inherited many of these spam sources from its acquisition of MCI, specifically through the UUNet subsidiary of MCI, which Verizon subsequently renamed Verizon Business.

Anti-spam techniques

The U.S. Department of Energy Computer Incident Advisory Capability (CIAC) has provided specific countermeasures against email spamming.

Some popular methods for filtering and refusing spam include email filtering based on the content of the email, DNS-based blackhole lists (DNSBL), greylisting, spamtraps, enforcing technical requirements of email (SMTP), checksumming systems to detect bulk email, and putting some sort of cost on the sender via a proof-of-work system or a micropayment. Each method has strengths and weaknesses, and each is controversial because of its weaknesses. For example, one company’s offer to “[remove] some spamtrap and honeypot addresses” from email lists defeats the ability of those methods to identify spammers.

Outbound spam protection combines many of these techniques to scan messages leaving a service provider’s network, identify spam, and take action such as blocking the message or shutting off the source of the message.

In one study of the spam value chain, 95 percent of the revenue from spam-advertised products cleared through just three banks.

How spammers operate

Gathering of addresses

In order to send spam, spammers need to obtain the email addresses of the intended recipients. To this end, both spammers themselves and list merchants gather huge lists of potential email addresses. Since spam is, by definition, unsolicited, this address harvesting is done without the consent (and sometimes against the expressed will) of the address owners. As a consequence, spammers’ address lists are inaccurate. A single spam run may target tens of millions of possible addresses – many of which are invalid, malformed, or undeliverable.

Sometimes, if spam is “bounced” back to the sender by filtering programs, or if the recipient clicks an unsubscribe link, the targeted email address may be marked as “valid”, which the spammer interprets as “send me more”. Treating an unsubscribe request this way is illegal under anti-spam legislation, however. Thus a recipient should not automatically assume that an unsubscribe link is an invitation to be sent more messages. If the originating company is legitimate and the content of the message is legitimate, then individuals should unsubscribe from messages they no longer wish to receive.

Delivering spam messages

Obfuscating message content

Many spam-filtering techniques work by searching for patterns in the headers or bodies of messages. For instance, a user may decide that all email they receive with the word “Viagra” in the subject line is spam, and instruct their mail program to automatically delete all such messages. To defeat such filters, the spammer may intentionally misspell commonly filtered words or insert other characters, often in a style similar to leetspeak, as in the following examples: V1agra, Via'gra, Vi@graa, vi*gra, \/iagra. This also allows for many different ways to express a given word, making identifying them all more difficult for filter software.

The principle of this method is to leave the word readable to humans (who can easily recognize the intended word despite such misspellings), but unlikely to be recognized by a literal computer program. This is only somewhat effective, because modern filter patterns have been designed to recognize blacklisted terms in their various misspelled iterations. Other filters target the actual obfuscation methods, such as the non-standard use of punctuation or the placement of numerals in unusual positions. Similarly, HTML-based email gives the spammer more tools to obfuscate text. Inserting HTML comments between letters can foil some filters, as can including text made invisible by setting the font color to white on a white background, or shrinking the font size to the smallest fine print. Another common ploy involves presenting the text as an image, which is either sent along or loaded from a remote server. This can be foiled by configuring the email program not to load remote images.
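
As an illustration of how a filter can target these obfuscations, the sketch below builds a regular expression that tolerates common character substitutions and inserted punctuation for a blacklisted term such as “viagra”. It is a minimal example of the technique, not any particular product’s implementation, and the substitution table is only a small illustrative subset.

  import re

  # Map letters to character classes covering common look-alike substitutions
  # (a small, illustrative subset, not an exhaustive list).
  SUBSTITUTIONS = {
      "a": "[a@4]", "e": "[e3]", "g": "[g9]", "i": "[i1l!|]",
      "o": "[o0]", "s": "[s5$]", "v": r"[v\\/]",
  }

  def obfuscation_pattern(word):
      # Allow up to two punctuation characters between letters (e.g. Via'gra, vi**agra).
      sep = r"[\W_]{0,2}"
      parts = [SUBSTITUTIONS.get(c, re.escape(c)) for c in word.lower()]
      return re.compile(sep.join(parts), re.IGNORECASE)

  # Variants that drop a letter entirely (e.g. vi*gra) would need the letter
  # classes to be made optional, at the cost of more false matches.
  pattern = obfuscation_pattern("viagra")
  for sample in ["Viagra", "V1agra", "Via'gra", "Vi@graa", "\\/iagra", "meeting agenda"]:
      print(sample, "->", bool(pattern.search(sample)))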

As Bayesian filtering has become popular as a spam-filtering technique, spammers have started using methods to weaken it. To a rough approximation, Bayesian filters rely on word probabilities. If a message contains many words that are used only in spam, and few that are never used in spam, it is likely to be spam. To weaken Bayesian filters, some spammers, alongside the sales pitch, now include lines of irrelevant, random words, in a technique known as Bayesian poisoning. A variant on this tactic may be borrowed from the Usenet abuser known as “Hipcrime”—to include passages from books taken from Project Gutenberg, or nonsense sentences generated with “dissociated press” algorithms. Randomly generated phrases can create spoetry (spam poetry) or spam art. The perceived credibility of spam messages by users differs across cultures; for example, Korean unsolicited email frequently uses apologies, likely to be based on Koreans’ modeling behavior and a greater tendency to follow social norms.
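
To make the word-probability idea concrete, here is a minimal naive-Bayes-style scoring sketch. It is an illustration of the principle only; the word counts are invented toy data, and real filters (SpamAssassin, SpamBayes, and the like) are far more sophisticated and also try to resist the poisoning described above.

  import math
  import re

  # Toy training data: per-word counts from previously classified spam and ham.
  spam_counts = {"viagra": 40, "cash": 25, "meeting": 1, "free": 30}
  ham_counts = {"viagra": 1, "cash": 3, "meeting": 50, "free": 5}
  spam_total, ham_total = 500, 500  # messages of each class seen in training

  def spam_log_odds(text):
      """Sum of per-word log-likelihood ratios: > 0 leans spam, < 0 leans ham."""
      score = 0.0
      for word in re.findall(r"[a-z']+", text.lower()):
          # Laplace smoothing so unseen words do not zero out the estimate.
          p_spam = (spam_counts.get(word, 0) + 1) / (spam_total + 2)
          p_ham = (ham_counts.get(word, 0) + 1) / (ham_total + 2)
          score += math.log(p_spam / p_ham)
      return score

  print(spam_log_odds("free cash viagra now"))    # clearly positive (spam-like)
  print(spam_log_odds("agenda for the meeting"))  # clearly negative (ham-like)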

Another method used to masquerade spam as legitimate messages is the use of autogenerated sender names in the From: field, ranging from realistic ones such as “Jackie F. Bird” to (either by mistake or intentionally) bizarre attention-grabbing names such as “Sloppiest U. Epiglottis” or “Attentively E. Behavioral”. Return addresses are also routinely auto-generated, often using unsuspecting domain owners’ legitimate domain names, leading some users to blame the innocent domain owners. Blocking lists use IP addresses rather than sender domain names, as these are more accurate. A mail purporting to be from example.com can be seen to be faked by looking for the originating IP address in the email’s headers; also Sender Policy Framework, for example, helps by stating that a certain domain will send email only from certain IP addresses.
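
Concretely, the owner of example.com who wants to state that only specific servers may send its mail could publish a DNS TXT record along these lines (the mechanisms shown are standard SPF syntax; the network range is a documentation placeholder):

  example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"

A receiver that checks SPF can then treat mail claiming to be from example.com but arriving from any other address as a failure, which the trailing "-all" asks it to do.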

Spam can also be hidden inside a fake “Undelivered mail notification” which looks like the failure notices sent by a mail transfer agent (a “MAILER-DAEMON”) when it encounters an error.

Spam-support services

A number of other online activities and business practices are considered by anti-spam activists to be connected to spamming. These are sometimes termed spam-support services: business services, other than the actual sending of spam itself, which permit the spammer to continue operating. Spam-support services can include processing orders for goods advertised in spam, hosting Web sites or DNS records referenced in spam messages, or a number of specific services as follows:

Some Internet hosting firms advertise bulk-friendly or bulletproof hosting. This means that, unlike most ISPs, they will not terminate a customer for spamming. These hosting firms operate as clients of larger ISPs, and many have eventually been taken offline by these larger ISPs as a result of complaints regarding spam activity. Thus, while a firm may advertise bulletproof hosting, it is ultimately unable to deliver without the connivance of its upstream ISP. However, some spammers have managed to get what is called a pink contract (see below) – a contract with the ISP that allows them to spam without being disconnected.

A few companies produce spamware, or software designed for spammers. Spamware varies widely, but may include the ability to import thousands of addresses, to generate random addresses, to insert fraudulent headers into messages, to use dozens or hundreds of mail servers simultaneously, and to make use of open relays. The sale of spamware is illegal in eight U.S. states.

So-called millions CDs are commonly advertised in spam. These are CD-ROMs purportedly containing lists of email addresses, for use in sending spam to these addresses. Such lists are also sold directly online, frequently with the false claim that the owners of the listed addresses have requested (or “opted in”) to be included. Such lists often contain invalid addresses. In recent years, these have fallen almost entirely out of use, both because of the low quality of the email addresses available on them and because some email lists are now so large (exceeding 20 GB) that only a small fraction would fit on a CD.

A number of DNS blacklists (DNSBLs), including the MAPS RBL, Spamhaus SBL, SORBS and SPEWS, target the providers of spam-support services as well as spammers. DNSBLs blacklist IPs or ranges of IPs to persuade ISPs to terminate services with known customers who are spammers or resell to spammers.

Related vocabulary

Unsolicited bulk email (UBE)
A synonym for email spam.
Unsolicited commercial email (UCE)
Spam promoting a commercial service or product. This is the most common type of spam, but it excludes spam that consists of hoaxes (e.g. virus warnings), political advocacy, religious messages, and chain letters sent by a person to many other people. The term UCE may be most common in the USA.
Pink contract
A pink contract is a service contract offered by an ISP which offers bulk email service to spamming clients, in violation of that ISP’s publicly posted acceptable use policy.
Spamvertising
Spamvertising is advertising through the medium of spam.
Opt-in, confirmed opt-in, double opt-in, opt-out
Opt-in, confirmed opt-in, double opt-in, and opt-out refer to whether the people on a mailing list are given the option to be put on, or taken off, the list. Confirmation (“double” opt-in, in marketing speak) means that an email address submitted, for example through a web form, is confirmed to have genuinely requested joining the mailing list, instead of being added to the list without verification.
Final, Ultimate Solution for the Spam Problem (FUSSP)
An ironic reference to naïve developers who believe they have invented the perfect spam filter, which will stop all spam from reaching users’ inboxes while deleting no legitimate email accidentally.
Bacn
Bacn is email that has been subscribed to and is therefore solicited. Bacn has been described as “email you want but not right now.” Some examples of common bacn messages are news alerts, periodic messages from e-merchants from whom one has made previous purchases, messages from social networking sites, and wiki watch lists. The name bacn is meant to convey the idea that such email is “better than spam, but not as good as a personal email”. It was originally coined in August 2007 at PodCamp Pittsburgh 2, and since then has been used amongst the blogging community.

What Are Email Blacklists

A DNS-based Blackhole List (DNSBL) or Real-time Blackhole List (RBL) is an effort to stop email spamming. It is a “blacklist” of locations on the Internet reputed to send email spam. The list entries are IP addresses, most often those of computers or networks linked to spamming; most mail server software can be configured to reject or flag messages which have been sent from a site listed on one or more such lists. The term “Blackhole List” is sometimes interchanged with the terms “blacklist” and “blocklist”.

A DNSBL is a software mechanism, rather than a specific list or policy. There are dozens of DNSBLs in existence, which use a wide array of criteria for listing and delisting of addresses. These may include listing the addresses of zombie computers or other machines being used to send spam, ISPs who willingly host spammers, or those which have sent spam to a honeypot system.

Since the creation of the first DNSBL in 1997, the operation and policies of these lists have been frequently controversial, both in Internet advocacy and occasionally in lawsuits. Many email systems operators and users consider DNSBLs a valuable tool to share information about sources of spam, but others including some prominent Internet activists have objected to them as a form of censorship. In addition, a small number of DNSBL operators have been the target of lawsuits filed by spammers seeking to have the lists shut down.

History of DNSBLs

The first DNSBL was the Real-time Blackhole List (RBL), created in 1997, at first as a BGP feed by Paul Vixie, and then as a DNSBL by Eric Ziegast as part of Vixie’s Mail Abuse Prevention System (MAPS); Dave Rand at Abovenet was its first subscriber. The very first version of the RBL was not published as a DNSBL, but rather a list of networks transmitted via BGP to routers owned by subscribers so that network operators could drop all TCP/IP traffic for machines used to send spam or host spam supporting services, such as a website. The inventor of the technique later commonly called a DNSBL was Eric Ziegast while employed at Vixie Enterprises.

The term “blackhole” refers to a networking black hole, an expression for a link on a network that drops incoming traffic instead of forwarding it normally. The intent of the RBL was that sites using it would refuse traffic from sites which supported spam — whether by actively sending spam, or in other ways. Before an address would be listed on the RBL, volunteers and MAPS staff would attempt repeatedly to contact the persons responsible for it and get its problems corrected. Such effort was considered very important before blackholing all network traffic, but it also meant that spammers and spam supporting ISPs could delay being put on the RBL for long periods while such discussions went on.

Later, the RBL was also released in a DNSBL form and Paul Vixie encouraged the authors of sendmail and other mail software to implement RBL support in their clients. These allowed the mail software to query the RBL and reject mail from listed sites on a per-mail-server basis instead of blackholing all traffic.

Soon after the advent of the RBL, others started developing their own lists with different policies. One of the first was Alan Brown’s Open Relay Behavior-modification System (ORBS). This used automated testing to discover and list mail servers running as open mail relays—exploitable by spammers to carry their spam. ORBS was controversial at the time because many people felt running an open relay was acceptable, and that scanning the Internet for open mail servers could be abusive.

In 2003, a number of DNSBLs came under denial-of-service attacks. Since no party has admitted to these attacks nor been discovered responsible, their purpose is a matter of speculation. However, many observers believe the attacks are perpetrated by spammers in order to interfere with the DNSBLs’ operation or hound them into shutting down. In August 2003, the firm Osirusoft, an operator of several DNSBLs including one based on the SPEWS data set, shut down its lists after suffering weeks of near-continuous attack.

URI DNSBLs

A URI DNSBL is a DNSBL that lists the domain names and sometimes also IP addresses which are found in the “clickable” links contained in the body of spams, but generally not found inside legitimate messages.

URI DNSBLs were created when it was determined that much spam made it past spam filters during that short time frame between the first use of a spam-sending IP address and the point where that sending IP address was first listed on major sending-IP-based DNSBLs.

In many cases, such elusive spams contain in their links domain names or IP addresses (collectively referred to as URIs), where that URI was already spotted in previously caught spam and where that URI is not found in non-spam e-mail.

Therefore, when a spam filter extracts all URIs from a message and checks them against a URI DNSBL, then the spam can be blocked even if the sending IP for that spam has not yet been listed on any sending IP DNSBL.

Of the three major URI DNSBLs, the oldest and most popular is SURBL. After SURBL was created, some of the volunteers for SURBL started the second major URI DNSBL, URIBL. In 2008, another long-time SURBL volunteer started another URI DNSBL, ivmURI. The Spamhaus Project provides the Spamhaus Domain Block List (DBL) which they describe as domains “found in spam messages”. The DBL is intended as both a URIBL and RHSBL, to be checked against both domains in a message’s envelope and headers and domains in URLs in message bodies. Unlike other URIBLs, the DBL only lists domain names, not IP addresses, since Spamhaus provides other lists of IP addresses.

URI DNSBLs are often confused with RHSBLs (Right Hand Side BLs), but they are different. A URI DNSBL lists domain names and IPs found in the body of the message. An RHSBL lists the domain names used in the “from” or “reply-to” e-mail address. RHSBLs are of debatable effectiveness since many spams either use forged “from” addresses or use “from” addresses containing popular freemail domain names, such as @gmail.com, @yahoo.com, or @hotmail.com. URI DNSBLs are more widely used than RHSBLs, are very effective, and are used by the majority of spam filters.

How a DNSBL works

To operate a DNSBL requires three things: a domain to host it under, a nameserver for that domain, and a list of addresses to publish.

It is possible to serve a DNSBL using any general-purpose DNS server software. However, this is typically inefficient for zones containing large numbers of addresses, particularly DNSBLs which list entire Classless Inter-Domain Routing netblocks. Because of the resource consumption of general-purpose name servers in this role, there are purpose-built applications designed specifically for serving DNS blacklists.

The hard part of operating a DNSBL is populating it with addresses. DNSBLs intended for public use usually have specific, published policies as to what a listing means, and must be operated accordingly to attain or sustain public confidence.
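
To illustrate what publishing such a list looks like, a DNSBL operating under the zone dnsbl.example.net that wanted to list the (documentation-range) address 192.0.2.99 might serve records roughly like the following. This is only a sketch; real lists generate these entries from their own data feeds and policies.

  99.2.0.192.dnsbl.example.net.  IN  A    127.0.0.2
  99.2.0.192.dnsbl.example.net.  IN  TXT  "Listed: spam received at a spamtrap"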

DNSBL queries

When a mail server receives a connection from a client, and wishes to check that client against a DNSBL (let’s say, dnsbl.example.net), it does more or less the following:

  1. Take the client’s IP address—say, 192.168.42.23—and reverse the order of octets, yielding 23.42.168.192.
  2. Append the DNSBL’s domain name: 23.42.168.192.dnsbl.example.net.
  3. Look up this name in the DNS as a domain name (“A” record). This will return either an address, indicating that the client is listed; or an “NXDOMAIN” (“No such domain”) code, indicating that the client is not.
  4. Optionally, if the client is listed, look up the name as a text record (“TXT” record). Most DNSBLs publish information about why a client is listed as TXT records.

Looking up an address in a DNSBL is thus similar to looking it up in reverse-DNS. The differences are that a DNSBL lookup uses the “A” rather than “PTR” record type, and uses a forward domain (such as dnsbl.example.net above) rather than the special reverse domain in-addr.arpa.

There is an informal protocol for the addresses returned by DNSBL queries which match. Most DNSBLs return an address in the 127.0.0.0/8 IP loopback network. The address 127.0.0.2 indicates a generic listing. Other addresses in this block may indicate something specific about the listing—that it indicates an open relay, proxy, spammer-owned host, etc.
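
The steps above translate almost directly into code. The sketch below uses only the Python standard library; dnsbl.example.net is a placeholder zone, and the optional TXT lookup from step 4 is omitted because plain socket calls only resolve addresses.

  import socket

  def dnsbl_lookup(client_ip, zone="dnsbl.example.net"):
      """Return the DNSBL answer address (e.g. '127.0.0.2') if listed, else None."""
      # Steps 1-2: reverse the octets and append the DNSBL's domain name.
      reversed_ip = ".".join(reversed(client_ip.split(".")))
      query_name = f"{reversed_ip}.{zone}"  # e.g. 23.42.168.192.dnsbl.example.net
      try:
          # Step 3: an answer means the client is listed; NXDOMAIN raises an error.
          return socket.gethostbyname(query_name)
      except socket.gaierror:
          return None

  answer = dnsbl_lookup("192.168.42.23")
  if answer is None:
      print("not listed")
  else:
      # 127.0.0.2 is the conventional generic listing; other 127.0.0.x values
      # may encode the reason (open relay, proxy, spammer-owned host, etc.).
      print("listed:", answer)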

URI DNSBL

A URI DNSBL query (and an RHSBL query) is fairly straightforward. The domain name to query is prepended to the DNS list host as follows:

example.net.dnslist.example.com

where dnslist.example.com is the DNS list host and example.net is the queried domain. Generally if an A record is returned the name is listed.
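
In code the check is the same A-record query as for an IP-based list, just without reversing anything (dnslist.example.com is again a placeholder):

  import socket

  def uri_dnsbl_listed(domain, zone="dnslist.example.com"):
      try:
          socket.gethostbyname(f"{domain}.{zone}")  # e.g. example.net.dnslist.example.com
          return True   # an A record came back: the domain is listed
      except socket.gaierror:
          return False  # NXDOMAIN: not listed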

DNSBL policies

Different DNSBLs have different policies. DNSBL policies differ from one another on three fronts:

  • Goals. What does the DNSBL seek to list? Is it a list of open-relay mail servers or open proxies—or of IP addresses known to send spam—or perhaps of IP addresses belonging to ISPs that harbor spammers?
  • Nomination. How does the DNSBL discover addresses to list? Does it use nominations submitted by users? Spam-trap addresses or honeypots?
  • Listing lifetime. How long does a listing last? Are they automatically expired, or only removed manually? What can the operator of a listed host do to have it delisted?

Varieties of DNSBLs

In addition to the different types of listed entities (IP addresses for traditional DNSBLs, host and domain names for RHSBLs, URIs for URIBLs) there is a wide range of semantic variations between lists as to what a listing means. List maintainers themselves have been divided on the issues of whether their listings should be seen as statements of objective fact or subjective opinion and on how their lists should best be used. As a result, there is no definitive taxonomy for DNSBLs. Some names defined here (e.g. "Yellow" and "NoBL") are varieties that are not in widespread use and so the names themselves are not in widespread use, but should be recognized by many spam control specialists.

  • White List – A listing is an affirmative indication of essentially absolute trust
  • Black List – A listing is a negative indication of essentially absolute distrust
  • Grey List – Most frequently seen as one word (greylist or greylisting) not involving DNSBLs directly, but using temporary deferral of mail from unfamiliar sources to allow for the development of a public reputation (such as DNSBL listings) or to discourage speed-focused spamming. Occasionally used to refer to actual DNSBLs on which listings denote distinct non-absolute levels and forms of trust or distrust.
  • Yellow List – A listing indicates that the source is known to produce a mixture of spam and non-spam to a degree that makes checking other DNSBLs of any sort useless.
  • NoBL List – A listing indicates that the source is believed to send no spam and should not be subjected to blacklist testing, but is not quite as trusted as a whitelisted source.

Uses of DNSBLs

  • Many MTAs like Exim, Sendmail, and Postfix can be configured to absolutely block or (less commonly) to accept email based on a DNSBL listing. This is the oldest usage form of DNSBLs. Depending on the specific MTA, there can be subtle distinctions in configuration that make list types such as Yellow and NoBL useful or pointless because of how the MTA handles multiple DNSBLs. A drawback of using the direct DNSBL support in most MTAs is that sources not on any list require checking all of the DNSBLs being used, with relatively little utility to caching the negative results. In some cases this can cause a significant slowdown in mail delivery. Using White, Yellow, and NoBL lists to avoid some lookups can alleviate this in some MTAs. (A minimal Postfix configuration sketch for DNSBL-based rejection appears after this list.)
  • DNSBLs can be used in rule based spam analysis software like Spamassassin where each DNSBL has its own rule. Each rule has a specific positive or negative weight which is combined with other types of rules to score each message. This allows for the use of rules that act (by whatever criteria are available in the specific software) to “whitelist” mail that would otherwise be rejected due to a DNSBL listing or due to other rules. This can also have the problem of heavy DNS lookup load for no useful results, but it may not delay mail as much because scoring makes it possible for lookups to be done in parallel and asynchronously while the filter is checking the message against the other rules.
  • It is possible with some toolsets to blend the binary testing and weighted rule approaches. One way to do this is to first check white lists and accept the message if the source is on a white list, bypassing all other testing mechanisms. A technique developed by Junk Email Filter uses Yellow Lists and NoBL lists to mitigate the false positives that occur routinely when using black lists that are not carefully maintained to avoid them.
  • Some DNSBLs have been created for uses other than filtering email for spam, but rather for demonstration, informational, rhetorical, and testing control purposes. Examples include the “No False Negatives List,” “Lucky Sevens List,” “Fibonacci’s List,” various lists encoding GeoIP information, and random selection lists scaled to match coverage of another list, useful as a control for determining whether that list’s effects are distinguishable from random rejections.
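
As an example of the MTA usage described in the first item above, a Postfix server is commonly pointed at a DNSBL with a few lines in main.cf similar to the following. This is a minimal sketch: the zone name is a placeholder, and a production setup should follow the chosen list operator’s usage policy.

  # /etc/postfix/main.cf (excerpt)
  smtpd_recipient_restrictions =
      permit_mynetworks,
      reject_unauth_destination,
      reject_rbl_client dnsbl.example.net

With this in place, Postfix rejects the message during the SMTP transaction whenever the connecting client’s address is listed in the configured zone.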

Criticisms

Some end-users and organizations have concerns regarding the concept of DNSBLs or the specifics of how they are created and used. Some of the criticisms include:

  • Legitimate emails blocked along with spam from shared mailservers. When an ISP’s shared mailserver has one or more compromised machines sending spam, it can become listed on a DNSBL. End-users assigned to that same shared mailserver may find their emails blocked by receiving mailservers using such a DNSBL.
  • Lists of dynamic IP addresses. This type of DNSBL lists IP addresses submitted by ISPs as dynamic and therefore presumably unsuitable to send email directly; the end-user is supposed to use the ISP’s mailserver for all sending of email. But these lists can also accidentally include static addresses, which may be legitimately used by small-business owners or other end-users to host small email servers.
  • Lists that include “spam-support operations”, such as MAPS RBL. A spam-support operation is a site that may not directly send spam, but provides commercial services for spammers, such as hosting of Web sites that are advertised in spam. Refusal to accept mail from spam-support operations is intended as a boycott to encourage such sites to cease doing business with spammers, at the expense of inconveniencing non-spammers who use the same site as spammers.
  • Some lists have unclear listing criteria and delisting may not happen automatically nor quickly. A few DNSBL operators will request payment (e.g. uceprotect.net) or donation (e.g. SORBS).
  • Because lists have varying methods for adding IP addresses and/or URIs, it can be difficult for senders to configure their systems appropriately to avoid becoming listed on a DNSBL. For example, the UCEProtect DNSBL seems to list IP addresses merely once they have validated a recipient address or established a TCP connection, even if no spam message is ever delivered.

Despite the criticisms, few people object to the principle that mail-receiving sites should be able to reject undesired mail systematically. One person who does is John Gilmore, who deliberately operates an open mail relay. Gilmore accuses DNSBL operators of violating antitrust law.

For Joe Blow to refuse emails is legal (though it’s bad policy, akin to “shooting the messenger”). But if Joe and ten million friends all gang up to make a blacklist, they are exercising illegal monopoly power.

A number of parties, such as the Electronic Frontier Foundation and Peacefire, have raised concerns about some use of DNSBLs by ISPs. One joint statement issued by a group including EFF and Peacefire addressed “stealth blocking”, in which ISPs use DNSBLs or other spam-blocking techniques without informing their clients.

What Is Email Spoofing

Email spoofing is the creation of email messages with a forged sender address. It is easy to do because the core protocols do not have any mechanism for authentication. It can be accomplished from within a LAN or from an external environment using Trojan horses. Spam and phishing emails typically use such spoofing to mislead the recipient about the origin of the message.

Technical detail

When an SMTP email is sent, the initial connection provides two pieces of address information:

  • MAIL FROM: – generally presented to the recipient as the Return-path: header but not normally visible to the end user, and by default no checks are done that the sending system is authorized to send on behalf of that address.
  • RCPT TO: – specifies which email address the email is delivered to; it is not normally visible to the end user but may be present in the headers as part of the “Received:” header.

Together these are sometimes referred to as the “envelope” addressing, by analogy with a traditional paper envelope.

Once the receiving mail server signals that it accepted these two items, the sending system sends the “DATA” command, and typically sends several header items, including:

  • From: Joe Q Doe <joeqdoe@example.com> – the address visible to the recipient; but again, by default no checks are done that the sending system is authorized to send on behalf of that address.
  • Reply-to: Jane Roe <Jane.Roe@example.mil> – similarly not checked

The result is that the email recipient sees the email as having come from the address in the From: header; they may sometimes be able to find the MAIL FROM address; and if they reply to the email it will go to either the address presented in the From: or Reply-to: header – but none of these addresses are typically reliable, so automated bounce messages may generate backscatter.
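
A condensed SMTP exchange makes the distinction concrete. Everything before DATA is the envelope; everything after it is message content, which is what most mail clients display (the addresses and annotations are illustrative):

  S: 220 mx.example.org ESMTP
  C: HELO relay.example.net
  S: 250 mx.example.org
  C: MAIL FROM:<bounce-handler@example.net>     (envelope sender / Return-path)
  S: 250 OK
  C: RCPT TO:<recipient@example.org>            (envelope recipient)
  S: 250 OK
  C: DATA
  S: 354 End data with <CR><LF>.<CR><LF>
  C: From: Joe Q Doe <joeqdoe@example.com>      (the address the recipient sees)
  C: Reply-to: Jane Roe <Jane.Roe@example.mil>
  C: Subject: Hello
  C:
  C: (message body)
  C: .
  S: 250 OK: queued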

Use by spam and worms

Malware such as Klez and Sober, and many more modern examples, often search for email addresses within the computer they have infected, and use those addresses both as targets for email and to create credible forged From: fields in the emails that they send, so that these emails are more likely to be opened. For example:

  1. Alice is sent an infected email which she opens, running the worm code.
  2. The worm code searches Alice’s email address book and finds the addresses of Bob and Charlie.
  3. From Alice’s computer, the worm sends an infected email to Bob, but forged to appear to have been sent by Charlie.

In this case, even if Bob’s system detects the incoming mail as containing malware, he sees the source as being Charlie, even though it really came from Alice’s computer; meanwhile Alice remains unaware that her computer has been infected with a worm.

Legitimate use

In the early Internet, “legitimately spoofed” email was common. For example, a visiting user might use the local organization’s SMTP server to send email from the user’s foreign address. Since most servers were configured as “open relays”, this was a common practice. As spam email became an annoying problem, these sorts of “legitimate” uses fell out of favor.

When multiple software systems communicate with each other via email, spoofing may be required in order to facilitate such communication. In any scenario where an email address is set up to automatically forward incoming emails to a system which only accepts emails from the email forwarder, spoofing is required in order to facilitate this behavior. This is common between ticketing systems which communicate with other ticketing systems.

The effect on mailservers

Traditionally, mail servers could accept a mail item, then later send a Non-Delivery Report or “bounce” message if it couldn’t be delivered or had been quarantined for any reason. These would be sent to the “MAIL FROM:” aka “Return Path” address. With the massive rise in forged addresses, best practice is now not to generate NDRs for detected spam, viruses etc. but to reject the email during the SMTP transaction. When mail administrators fail to take this approach, their systems end up sending “backscatter” emails to innocent parties – in itself a form of spam – or being used to perform “Joe job” attacks.

Identifying the source of the email

Although email spoofing is effective in forging the email address, the IP address of the computer sending the mail can generally be identified from the “Received:” lines in the email header. In many cases this is likely to be an innocent third party infected by malware that is sending the email without the owner’s knowledge.
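
A small sketch of this kind of inspection, using only the Python standard library (the sample message is invented; in practice only the Received: lines added by servers you trust can be believed, since a sender may forge earlier ones):

  import email
  import re

  def received_ips(raw_message):
      """Extract bracketed IP addresses from Received: headers, topmost (newest) first."""
      msg = email.message_from_string(raw_message)
      ips = []
      for header in msg.get_all("Received", []):
          ips.extend(re.findall(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]", header))
      return ips

  sample = (
      "Received: from mail.example.net (mail.example.net [203.0.113.5])\n"
      "\tby mx.example.org with ESMTP; Mon, 1 Jan 2024 00:00:00 +0000\n"
      "Received: from unknown (helo=pc-at-home) [198.51.100.77]\n"
      "\tby mail.example.net; Mon, 1 Jan 2024 00:00:00 +0000\n"
      "From: someone@example.com\n"
      "\n"
      "Body\n"
  )
  print(received_ips(sample))  # ['203.0.113.5', '198.51.100.77']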

Counter measures

SSL/TLS in mail transfer software can be used to enforce authentication, but is seldom used for this in practice. However, a number of effective systems for authenticating the sending domain are widely used, including:

  • Sender Policy Framework (SPF), which lets a domain publish in DNS which servers are permitted to send mail on its behalf.
  • DomainKeys Identified Mail (DKIM), which cryptographically signs messages on behalf of the sending domain (described below).
  • Domain-based Message Authentication, Reporting and Conformance (DMARC), which builds on SPF and DKIM and adds policy and reporting (described below).

Although their use is increasing, estimates vary widely as to what percentage of emails have no form of domain authentication: from 8.6% to “almost half”. To effectively stop forged email from being delivered, receiving mail systems also need to be configured to check this authentication.

Understanding Domain-based Message Authentication, Reporting and Conformance (DMARC)

Domain-based Message Authentication, Reporting and Conformance (DMARC) is an email validation system designed to detect and prevent email spoofing. It provides a mechanism which allows a receiving organization to check that incoming mail from a domain is authorized by that domain’s administrators and that the email (including attachments) has not been modified during transport. It is thus intended to combat certain techniques often used in phishing and email spam, such as emails with forged sender addresses that appear to originate from legitimate organizations.

DMARC is built on top of two existing mechanisms, Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). It allows the sender of an email to publish a policy on which mechanism (DKIM, SPF or both) is employed when sending email and how the receiver should deal with failures. Additionally, it provides a reporting mechanism of actions performed under those policies. It thus coordinates the results of DKIM and SPF and specifies under which circumstances the From: header field, which is often visible to end users, should be considered legitimate.

History

A group of leading organizations came together in the spring of 2011 to collaborate on a method for combating fraudulent email at Internet-scale, based on practical experience with DKIM and SPF. They aimed to enable senders to publish easily discoverable policies on unauthenticated email – and to enable receivers to provide authentication reporting to senders to improve and monitor their authentication infrastructures.

The resulting DMARC specification was published on January 30, 2012, and within one year DMARC was estimated to protect 60% of the world’s mailboxes.

In October 2013, GNU Mailman 2.1.16 was released with options to handle posters from a domain with a DMARC policy of p=reject.

In April 2014, Yahoo changed its DMARC policy to p=reject, thereby causing misbehavior in several mailing lists.

A few days later, AOL also changed its DMARC policy to p=reject.

Overview

A DMARC policy allows a sender’s domain to indicate that their emails are protected by SPF and/or DKIM, and tells a receiver what to do if neither of those authentication methods passes – such as junking or rejecting the message. DMARC removes guesswork from the receiver’s handling of these failed messages, limiting or eliminating the user’s exposure to potentially fraudulent and harmful messages. DMARC also provides a way for the email receiver to report back to the sender’s domain about messages that pass and/or fail DMARC evaluation.

DMARC is designed to fit into an organization’s existing inbound email authentication process. The way it works is to help email receivers determine if the purported message aligns with what the receiver knows about the sender. If not, DMARC includes guidance on how to handle the “non-aligned” messages. DMARC doesn’t directly address whether or not an email is spam or otherwise fraudulent. Instead, DMARC requires that a message not only pass DKIM or SPF validation, but that it also pass alignment. For SPF, the message must PASS the SPF check, and the domain in the From: header must match the domain used to validate SPF (must exactly match for strict alignment, or must be a sub-domain for relaxed alignment). For DKIM, the message must be validly signed and the d= domain of the valid signature must align with the domain in the From: header (must exactly match for strict alignment, or must be a sub-domain for relaxed alignment). Under DMARC a message can fail even if it passes SPF or DKIM, but fails alignment.
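
The alignment rules themselves reduce to a domain comparison. A minimal sketch (it assumes SPF and DKIM have already been evaluated and only shows the relaxed/strict check that DMARC adds on top; a real implementation compares organizational domains using the Public Suffix List rather than raw sub-domain matching):

  def aligned(from_domain, authenticated_domain, mode="r"):
      """DMARC alignment: 'r' (relaxed) allows sub-domains, 's' (strict) requires an exact match."""
      from_domain = from_domain.lower()
      authenticated_domain = authenticated_domain.lower()
      if from_domain == authenticated_domain:
          return True
      if mode == "r":
          # Simplified sub-domain test in place of organizational-domain matching.
          return (authenticated_domain.endswith("." + from_domain)
                  or from_domain.endswith("." + authenticated_domain))
      return False

  # From: header domain vs. the d= domain of a valid DKIM signature
  print(aligned("example.com", "news.example.com", mode="r"))  # True (relaxed)
  print(aligned("example.com", "news.example.com", mode="s"))  # False (strict)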

DMARC policies are published in the public Domain Name System (DNS) as text (TXT) resource records (RR) and announce what an email receiver should do with non-aligned mail it receives.
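
For instance, the owner of example.com could publish a record like the following at _dmarc.example.com (the reporting address is a placeholder; p= could equally be none or reject, and pct= limits how much mail the policy applies to while ramping up):

  _dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; pct=100; rua=mailto:dmarc-reports@example.com; adkim=r; aspf=r"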

To ensure the sender trusts this process and knows the impact of publishing a policy different than p=none (monitor mode), the receiver sends daily aggregate reports indicating to the sender how many emails have been received and if these emails passed SPF and/or DKIM and were aligned.

DMARC may have a positive impact on deliverability for legitimate senders; Google, for example, recommends the use of DMARC for bulk email senders.

Human policy

DMARC policies are published by domain owners and applied by mail receivers to the messages that don’t pass the alignment test. The domain being queried is the author domain, that is, the domain to the right of @ in the From: header field. The policy can be none (the so-called monitor mode), quarantine (treat the message with suspicion, according to the receiver’s capabilities), or reject (reject the message outright). A reject policy is fine for domains that have no individual human users, or for companies with firm staff policies that all mail goes through the company mail server and that employees don’t join mailing lists and the like using company addresses, or where the company provides a separate, less strictly managed domain for its staff mail. Strict policies will never be appropriate for public webmail systems, where users will use their mail addresses in every way one can use a mail address.

In fact, human use of a mail address may involve email forwarding from a discontinued address, and mailing lists, both of which are frequent causes of legitimate breakage of the original author’s domain DKIM signature and therefore of DMARC alignment. Various workarounds have been proposed to cope with domains that publish strict policies unwittingly. For example, a mailing list manager could reject posts from authors who use problematic email domains. That behavior is the most respectful of the communication protocols as well as of the domain owner’s will, but it can cause inconvenience in the face of sudden policy changes. According to John Levine, a well-known mail expert, the least intrusive way to temporarily mitigate the damage would be to rewrite the From: address in a predictable, comprehensible manner, such as the following:

change
 From: John Doe <user@example.com>
to
 From: John Doe <user@example.com.INVALID>

The .INVALID top-level domain is reserved for this kind of usage. To apply the change, before re-mailing a message, a mail agent must look up the TXT RR at _dmarc.example.com, if any, and check whether it specifies a strict policy. If the change is applied, any recipient who wishes to reply to the author can easily work out how to correct the address; likewise, search engines that crawl mail archives can learn to discard the invalidating suffix. However, mail receiving systems may treat an email containing an invalid domain in its key header fields adversely.
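
A minimal sketch of that lookup-and-rewrite step might look like the following. It assumes the third-party dnspython package and only illustrates the idea; a real list manager would also need caching, organizational-domain handling, and more careful error handling.

    import dns.resolver  # third-party "dnspython" package (assumed available)

    def dmarc_policy(domain: str) -> str:
        """Return the p= value of the domain's DMARC record, or 'none' if absent."""
        try:
            answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            return "none"
        for rdata in answers:
            txt = b"".join(rdata.strings).decode()
            if txt.lower().startswith("v=dmarc1"):
                for part in txt.split(";"):
                    key, _, value = part.strip().partition("=")
                    if key == "p":
                        return value.lower()
        return "none"

    def maybe_rewrite_from(from_addr: str) -> str:
        """Append .INVALID to the From: address when its domain publishes a strict policy."""
        domain = from_addr.rsplit("@", 1)[-1]
        if dmarc_policy(domain) in ("quarantine", "reject"):
            return from_addr + ".INVALID"
        return from_addr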

A more intrusive workaround is available to forwarders that change either the body or the subject of a message, thereby invalidating the DKIM signature of the original author’s domain: the From: field can be rewritten, with the forwarder taking ownership of the message, and the original author’s address added to the Reply-To: field.

Several mailing list packages now offer options to deal with members posting from a domain with p=reject; Mailman 2.1.16 (16 October 2013) and onward is one example.

Making either change may bring the message out of compliance with RFC 5322, which states: “The ‘From:’ field specifies the author(s) of the message, that is, the mailbox(es) of the person(s) or system(s) responsible for the writing of the message.” Mailbox here refers to the author’s email address.


Understanding DomainKeys Identified Mail (DKIM)

Overview

DKIM provides for two distinct operations, signing and verifying. Either of them can be handled by a module of a mail transfer agent (MTA). The signing organization can be a direct handler of the message, such as the author, the submission site or a further intermediary along the transit path, or an indirect handler such as an independent service that is providing assistance to a direct handler. Signing modules insert one or more DKIM-Signature: header fields, possibly on behalf of the author organization or the originating service provider. Verifying modules typically act on behalf of the receiver organization, possibly at each hop.

The need for this type of validated identification arose because spam often has forged addresses and content. For example, a spam message may claim to be from sender@example.com, although it is not actually from that address, domain, or entity, and the spammer’s goal is to convince the recipient to accept and read the email. It is difficult for recipients to establish whether to trust or distrust any particular message or even domain, and system administrators may have to deal with complaints about spam that appears to have originated from their systems but did not. The DKIM specification allows signers to choose which header fields they sign, but the From: field must always be signed. DKIM allows the signer (the author organization) to communicate which emails it considers legitimate; it does not directly prevent or disclose abusive behavior. This ability to distinguish legitimate mail from potentially forged mail has benefits for recipients of e-mail as well as for senders.

DKIM is independent of Simple Mail Transfer Protocol (SMTP) routing aspects in that it operates on the transported mail’s header and body, not the SMTP envelope. Hence the DKIM signature survives basic relaying across multiple MTAs.

How it works

[Figure: DomainKeys Identified Mail (DKIM)]

The DKIM-Signature header field consists of a list of tag=value parts. Tags are short, usually only one or two letters. The most relevant ones are b for the actual digital signature of the contents (headers and body) of the mail message, bh for the body hash, d for the signing domain, and s for the selector. The default parameters for the authentication mechanism are to use SHA-256 as the cryptographic hash and RSA as the public key encryption scheme, and encode the encrypted hash using Base64.

Both header and body contribute to the signature. First, the message body is hashed, always from the beginning, possibly truncated at a given length (which may be zero). Second, selected header fields are hashed, in the order given by h. Repeated field names are matched from the bottom of the header upward, which is the order in which Received: fields are inserted into the header. A non-existing field matches the empty string, so that adding a field with that name will break the signature. The DKIM-Signature: field of the signature being created, with bh equal to the computed body hash and b equal to the empty string, is implicitly added to the second hash, although its name must not appear in h; if it does, it refers to another, preexisting signature. For both hashes, the text is canonicalized according to the relevant c algorithms. The result is b. Algorithms, fields, and body length are meant to be chosen so as to assure unambiguous message identification while still allowing signatures to survive the unavoidable changes which occur in transit. No data integrity is implied.
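
As a rough illustration of the body-hash step only (the bh= tag), the sketch below approximates the “simple” body canonicalization, which normalizes line endings and reduces trailing empty lines to a single CRLF; it ignores the header hash and the actual RSA signature.

    import base64
    import hashlib

    def simple_body_hash(body: str) -> str:
        """Approximate a DKIM bh= value using 'simple' body canonicalization."""
        lines = body.replace("\r\n", "\n").split("\n")
        while lines and lines[-1] == "":
            lines.pop()          # drop trailing empty lines
        canonical = "\r\n".join(lines) + "\r\n"
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        return base64.b64encode(digest).decode("ascii")

    print(simple_body_hash("Hello, world!\n\n\n"))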

The receiving SMTP server uses the domain name and the selector to perform a DNS lookup. For example, given the signature

DKIM-Signature: v=1; a=rsa-sha256; d=example.net; s=brisbane;
 c=relaxed/simple; q=dns/txt; l=1234; t=1117574938; x=1118006938;
 h=from:to:subject:date:keywords:keywords;
 bh=MTIzNDU2Nzg5MDEyMzQ1Njc4OTAxMjM0NTY3ODkwMTI=;
 b=dzdVyOfAKCdLXdJOc9G2q8LoXSlEniSbav+yuU4zGeeruD00lszZ
 VoG4ZHRNiYzR

A verifier queries the TXT resource record type of brisbane._domainkey.example.net. Here, example.net is the author domain to be verified against (given in the d field), brisbane is a selector given in the s field, and _domainkey is a fixed part of the protocol. There are neither CAs nor revocation lists involved in DKIM key management, and the selector is a straightforward method to allow signers to add and remove keys whenever they wish; long-lasting signatures for archival purposes are outside DKIM’s scope. Some more tags are visible in the example:

  • v is the version,
  • a is the signing algorithm,
  • d is the domain,
  • s is the selector,
  • c is the canonicalization algorithm(s) for header and body,
  • q is the default query method,
  • l is the length of the canonicalized part of the body that has been signed,
  • t is the signature timestamp,
  • x is its expiration time, and
  • h is the list of signed header fields, repeated for fields that occur multiple times.

The data returned from the query is also a list of tag-value pairs. It includes the domain’s public key, along with other key usage tokens and flags. The receiver can use the public key to verify the signature over the hash value in the header field, and at the same time recalculate the hash value for the mail message (headers and body) that was received. If the two values match, this cryptographically proves that the mail was signed by the indicated domain and has not been tampered with in transit.
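
Sketching just the lookup side of this (again assuming the third-party dnspython package; a complete verifier such as the dkimpy library also performs the canonicalization and the signature check itself):

    import dns.resolver  # third-party "dnspython" package (assumed available)

    def fetch_dkim_key_record(selector: str, domain: str) -> dict:
        """Fetch <selector>._domainkey.<domain> TXT and return its tag=value pairs.
        The 'p' tag holds the base64-encoded public key used to check the b= signature."""
        name = f"{selector}._domainkey.{domain}"
        answers = dns.resolver.resolve(name, "TXT")
        rdata = next(iter(answers))
        txt = b"".join(rdata.strings).decode()
        return dict(
            part.strip().split("=", 1)
            for part in txt.split(";")
            if "=" in part
        )

    # For the example signature above (d=example.net; s=brisbane):
    # fetch_dkim_key_record("brisbane", "example.net")["p"]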

Signature verification failure does not force rejection of the message. Instead, the precise reasons why the authenticity of the message could not be proven should be made available to downstream and upstream processes. Methods for doing so may include sending back an FBL (feedback loop) message, or adding an Authentication-Results header field to the message.

Advantages

The primary advantage of this system for e-mail recipients is it allows the signing domain to reliably identify a stream of legitimate email, thereby allowing domain-based blacklists and whitelists to be more effective. This is also likely to make certain kinds of phishing attacks easier to detect.

There are some incentives for mail senders to sign outgoing e-mail:

  • It allows a great reduction in abuse desk work for DKIM-enabled domains if e-mail receivers use the DKIM system to identify forged e-mail messages claiming to be from that domain.
  • The domain owner can then focus its abuse team energies on its own users who actually are making inappropriate use of that domain.

Use with spam filtering

DKIM is a method of labeling a message, and it does not itself filter or identify spam. However, widespread use of DKIM can prevent spammers from forging the source address of their messages, a technique they commonly employ today. If spammers are forced to show a correct source domain, other filtering techniques can work more effectively. In particular, the source domain can feed into a reputation system to better identify spam. Conversely, DKIM can make it easier to identify mail that is known not to be spam and need not be filtered. If a receiving system has a whitelist of known good sending domains, either locally maintained or from third party certifiers, it can skip the filtering on signed mail from those domains, and perhaps filter the remaining mail more aggressively.

Anti-phishing

DKIM can be useful as an anti-phishing technology. Mailers in heavily phished domains can sign their mail to show that it is genuine, and recipients can take the absence of a valid signature on mail from those domains as an indication that the mail is probably forged. The best way to determine the set of domains that merit this degree of scrutiny remains an open question. DKIM used to have an optional feature called ADSP that let authors who sign all their mail self-identify, but it was demoted to historic status in November 2013. Instead, DMARC can be used for the same purpose: it allows domains to self-publish which techniques (including SPF and DKIM) they employ, which makes it easier for the receiver to make an informed decision about whether a given mail is spam. Using DMARC, Gmail, for example, rejects all emails from eBay and PayPal that are not authenticated.

Compatibility

Because it is implemented using DNS records and an added header field, DKIM is compatible with the existing e-mail infrastructure. In particular, it is transparent to existing e-mail systems that lack DKIM support.

This design approach also is compatible with other, related services, such as the S/MIME and OpenPGP content-protection standards. DKIM is compatible with the DNSSEC standard and with SPF.

Protocol overhead

DKIM requires cryptographic checksums to be generated for each message sent through a mail server, which results in computational overhead not otherwise required for e-mail delivery. This additional computational overhead is a hallmark of digital postmarks, making sending bulk spam more (computationally) expensive. This facet of DKIM may look similar to hashcash, except that the receiver-side verification is a negligible amount of work, while a typical hashcash scheme would require far more work of the sender.

Weaknesses

DKIM signatures do not encompass the message envelope, which holds the return-path and message recipients. Since DKIM does not attempt to protect against mis-addressing, this does not affect its utility. A concern for any cryptographic solution would be message replay abuse, which bypasses techniques that currently limit the level of abuse from larger domains. Replay can be inferred by using per-message public keys, tracking the DNS queries for those keys and filtering out the high number of queries due to e-mail being sent to large mailing lists or malicious queries by bad actors. For a comparison of different methods also addressing this problem see e-mail authentication.

Arbitrary forwarding

As mentioned above, authentication is not the same as abuse prevention. A malicious email user of a reputable domain can compose a bad message, have it DKIM-signed, and send it from that domain to any mailbox from which it can be retrieved as a file, so as to obtain a signed copy of the message. Use of the l tag in signatures makes doctoring such messages even easier. The signed copy can then be forwarded to millions of recipients, for example through a botnet, without any control. The email provider who signed the message can block the offending user, but cannot stop the diffusion of already-signed messages. The validity of signatures in such messages can be limited by always including an expiration-time tag in signatures, or by revoking the public key periodically or upon notification of an incident. The effectiveness of this scenario can hardly be limited by filtering outgoing mail, as that implies the ability to detect whether a message might potentially be useful to spammers.

Content modification

DKIM currently features two canonicalization algorithms, simple and relaxed, neither of which is MIME-aware. Mail servers can legitimately convert to a different character set, and often document this with X-MIME-Autoconverted header fields. In addition, servers in certain circumstances have to rewrite the MIME structure, thereby altering the preamble, the epilogue, and entity boundaries, any of which breaks DKIM signatures. Only plain text messages written in us-ascii, provided that MIME header fields are not signed, enjoy the robustness that end-to-end integrity requires.

Annotations by mailing lists

The problems may be exacerbated when filtering or relaying software makes changes to a message. Without specific precautions taken by the sender, the footer added by most mailing lists and by many central antivirus solutions will break the DKIM signature. A possible mitigation is to sign only a designated number of bytes of the message body, indicated by the l tag in the DKIM-Signature header; anything added beyond the specified length of the message body is not taken into account when calculating the DKIM signature. This does not work for MIME messages.

Another workaround is to whitelist known forwarders, e.g., by SPF. Yet another proposal is for forwarders to verify the signature, modify the email, and then re-sign the message with a Sender: header. However, this solution carries risk with forwarded third-party-signed messages received at SMTP receivers supporting the ADSP protocol, so, in practice, the receiving server still has to whitelist known message streams.


Understanding Sender Policy Framework (SPF)

Principles of operation

The Simple Mail Transfer Protocol permits any computer to send email claiming to be from any source address. This is exploited by spammers who often use forged email addresses, making it more difficult to trace a message back to its source, and easy for spammers to hide their identity in order to avoid responsibility. It is also used in phishing techniques, where users can be duped into disclosing private information in response to an email purportedly sent by an organization such as a bank.

SPF allows the owner of an Internet domain to specify which computers are authorized to send mail with “from” addresses in that domain, using Domain Name System (DNS) records. Receivers verifying the SPF information in TXT records may reject messages from unauthorized sources before receiving the body of the message. Thus, the principles of operation are similar to those of DNS-based blackhole lists (DNSBL), except that SPF uses the authority delegation scheme of the Domain Name System.
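
A published policy is simply a TXT record beginning with v=spf1. The sketch below uses a hypothetical record and documentation IP ranges, and checks only the ip4: mechanisms; a real evaluator (such as the pyspf package) also handles include:, a:, mx:, macros, and the qualifier logic.

    import ipaddress

    # A hypothetical SPF record for example.com, as it might appear in DNS:
    #   example.com.  IN TXT  "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
    # Read left to right: hosts in 192.0.2.0/24 pass, hosts authorized by
    # _spf.example.net's policy pass, and everything else FAILs ("-all").

    def matches_ip4_mechanisms(record: str, sender_ip: str) -> bool:
        """Toy check of just the ip4: mechanisms in an SPF record."""
        for term in record.split():
            if term.startswith("ip4:"):
                if ipaddress.ip_address(sender_ip) in ipaddress.ip_network(term[4:]):
                    return True
        return False

    record = "v=spf1 ip4:192.0.2.0/24 include:_spf.example.net -all"
    print(matches_ip4_mechanisms(record, "192.0.2.15"))   # True
    print(matches_ip4_mechanisms(record, "203.0.113.9"))  # False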

The “from” address is transmitted at the beginning of the SMTP dialog. If the server rejects the domain, the unauthorized client should receive a rejection message, and if that client was a relaying message transfer agent (MTA), a bounce message to the original from address may be generated. If the server accepts the domain, and subsequently also accepts the recipients and the body of the message, it should insert a Return-Path field in the message header in order to save the from address. While the address in the Return-Path often matches other originator addresses in the mail header such as from, this is not necessarily the case, and SPF does not prevent forgery of these other addresses such as sender.

Spammers can send email with an SPF PASS result if they have an account in a domain with a sender policy, or abuse a compromised system in this domain. However, doing so makes the spammer easier to trace.

The main benefit of SPF is to the owners of e-mail addresses that are forged in the Return-Path. They receive large amounts of unsolicited error messages and other auto-replies. If such receivers use SPF to specify their legitimate source IP addresses and indicate FAIL result for all other addresses, receivers checking SPF can reject forgeries, thus reducing or eliminating the amount of backscatter.

SPF has potential advantages beyond helping identify unwanted mail. In particular, if a sender provides SPF information, then receivers can use SPF PASS results in combination with a white list to identify known reliable senders. Scenarios like compromised systems and shared sending mailers limit this use.

Reasons to implement

If a domain publishes an SPF record, spammers and phishers are less likely to forge e-mails pretending to be from that domain, because the forged e-mails are more likely to be caught in spam filters which check the SPF record. Therefore, an SPF-protected domain is less attractive to spammers and phishers. Because an SPF-protected domain is less attractive as a spoofed address, it is less likely to be blacklisted by spam filters and so ultimately the legitimate e-mail from the domain is more likely to get through.

FAIL and forwarding

SPF breaks plain message forwarding. When a domain publishes an SPF FAIL policy, legitimate messages sent to receivers forwarding their mail to third parties may be rejected and/or bounced if all of the following occur:

  • The forwarder does not rewrite the Return-Path, unlike mailing lists.
  • The next hop does not whitelist the forwarder.
  • This hop checks SPF.

This is a necessary and obvious feature of SPF – checks behind the “border” MTA (MX) of the receiver cannot work directly.

Publishers of SPF FAIL policies must accept the risk that their legitimate emails are being rejected or bounced. They should test (e.g., with a SOFTFAIL policy) until they are satisfied with the results. See below for a list of alternatives to plain message forwarding.

Caveats

Interpretation

SPF FAIL policies can be an effective but problematic tool. A typical example is a user who wishes to send email from a private PC or a mobile phone: the user uses their corporate email address but may use a different outgoing SMTP server which is not listed in the SPF record. The corporate domain can be secured by blocking all email that does not originate from its own servers, but this also restricts some of its own users. Many organizations consider this compromise acceptable, and even desirable, to avoid spoofing.

SPF PASS is useful for authenticating the domain for use as a parameter to a spam classification engine. That is, the domain in the sender address can be considered to be authentic if the originating IP yields an SPF PASS. The domain can then be referenced against a reputation database.

[Figure: SPF diagram]

SPF results other than PASS (which is useful in combination with a reputation system) and FAIL cannot be meaningfully mapped onto a simple pass/fail decision. However, a reputation system can easily track independent reputations for each SPF result, i.e., example.com:PASS and example.com:NEUTRAL would have different reputations, and likewise for the other results. This approach is useful even without whitelisting plain forwarders, because the FAIL results from plain forwarders simply accrue an independent reputation.

The meanings of PASS, SOFTFAIL, and FAIL are sometimes incorrectly interpreted as “not spam”, “maybe spam”, and “spam” respectively. However, SPF does nothing of the sort. SPF merely offers an organization, firstly, the means to classify emails based on their domain name instead of their IP address (SPF PASS); and secondly, the means to block unauthorized use of their domain (SPF FAIL).

Intra-domain forgery

In a naive implementation, SPF does not prevent a user in the same domain from sending email on behalf of another user, because only the domain part of the address is used to locate the SPF policy record. In more sophisticated implementations, the domain owner can specify separate policies for each user by means of SPF “macros” that reference the “localpart” (user), or can simply require all mail submissions for the domain to use SMTP AUTH; the latter is highly recommended anyway, for many reasons.

Checkpoints

SPF needs to operate on the host indicated by the receiving domain’s MX record. This means the host(s) that are the direct recipient of remote TCP connections, because such a host can easily deduce the originating IP address from the TCP session. These hosts are able to block the email during the SMTP session, avoiding the necessity to generate bounce messages which could be backscatter.

Other downstream hosts, for instance in a forwarding scenario, can only perform SPF checks based on “Received” headers. This is cumbersome and error-prone. A better approach is for the MX host to check SPF without blocking any email, and then add a “Received-SPF” header field or the newer “Authentication-Results” header. Downstream hosts can then look at these trace headers and set their own policy of whether to reject, accept, or quarantine based on the SPF result and other factors.
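
For example, an MX host that has already computed the SPF result could record it roughly as follows before handing the message on. The hostname and client IP below are placeholders, and the exact field syntax is specified in RFC 8601 (Authentication-Results) rather than here.

    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "user@example.com"
    msg["To"] = "someone@example.org"
    msg.set_content("Hello")

    # Record the SPF verdict for downstream hosts instead of rejecting outright.
    msg["Received-SPF"] = (
        "pass (mx.example.org: domain of user@example.com designates "
        "192.0.2.15 as permitted sender) client-ip=192.0.2.15"
    )
    msg["Authentication-Results"] = (
        "mx.example.org; spf=pass smtp.mailfrom=user@example.com"
    )

A downstream filter can then trust these trace fields (provided they were added by its own border host) and apply its own accept/quarantine/reject policy.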

DoS attack

An Internet draft discussed concerns related to the scale of an SPF answer leading to network exploits as a means to corrupt the DNS. The SPF project did a detailed analysis of this draft and claimed that SPF does not pose any unique threat of DNS DoS, citing example attacks using NS and MX records and identifying void DNS lookups (negative caching) as the key DNS weakness.

An SPF-based attack can generate more than 40 KB of DNS traffic per message, originating entirely from recipient resources, once a spam campaign whose MAIL FROM addresses use unique local-parts exceeds the negative-caching limits commonly imposed at the recipient. Windows-based services often impose rather low limits, and other resolvers also permit low negative-caching limits to mitigate access problems following service disruptions. SPF includes the “l” macro, which can incorporate components of the email address’s local-part into the DNS requests that the recipient’s resources must generate. The SPF result for “jo@example.com” may therefore not be the same as that for “lu@example.com”, and each evaluation can require repeating a potentially long sequence of more than 100 DNS transactions even though the SPF record itself is cached.

SPF provides several advantages for malefactors:

  • SPF-based attacks reflect off recipient resources, which obscures the (likely compromised) systems initiating the abuse, particularly where Authentication-Results fields omit the authorized IP addresses.
  • SPF-based attacks, even without short negative caching, offer a means to obtain significant network amplification.
  • SPF-based attacks are seldom logged.
  • When based upon MAIL FROM, the common case, authentication is not achieved, so the referenced domain’s involvement may not extend beyond mere authorization.

Relationship with DKIM

SPF validates the message envelope (the SMTP bounce address), not the message contents (header and body) – this is the distinction between SMTP and Internet Message Format. It is complementary to DomainKeys Identified Mail (DKIM), which signs the contents (including headers).

In brief, SPF validates MAIL FROM vs. its source server; DKIM validates the “From:” message header and a mail body by cryptographic means.


Phishing

Quick Facts

Phishing is a scam where Internet fraudsters send spam or pop-up messages to lure unsuspecting victims into providing passphrases, personal, and/or financial information. To avoid getting hooked:

  • Realize that no one should ask for your passphrase.
  • Don’t reply to email or pop-up messages that ask for passphrases, personal, or financial information, and do NOT click on links in such messages. Don’t cut and paste a link from the message into your Web browser — phishers can make links look like they go one place, but that actually send you to a different site.
  • Some scammers send an email that appears to be from a legitimate business and ask you to call a phone number to update your account or access a “refund.” Because they use Voice over Internet Protocol technology, the area code you call does not reflect where the scammers really are. If you need to reach an organization you do business with, call the number on your financial statements or on the back of your credit card.
  • Use anti-virus and anti-spyware software, as well as a firewall, and update them all regularly.
  • Don’t email passphrases, personal, or financial information.
  • Review credit card and bank account statements as soon as you receive them to check for unauthorized charges.
  • Be cautious about opening any attachment or downloading any files from emails you receive, regardless of who sent them.
  • Forward phishing emails to spam@uce.gov – and to the company, bank, or organization impersonated in the phishing email. You also may report phishing email to reportphishing@antiphishing.org. The Anti-Phishing Working Group, a consortium of ISPs, security vendors, financial institutions and law enforcement agencies, uses these reports to fight phishing.
  • If you’ve been scammed, visit the Federal Trade Commission’s Identity Theft website at www.consumer.ftc.gov/features/feature-0014-identity-theft.

For general information about phishing, see: What are phishing scams and how can I avoid them?

How not to get hooked by phishing scams

“We suspect an unauthorized transaction on your account. To ensure that your account is not compromised, please click the link below and confirm your identity.”

“During our regular verification of accounts, we couldn’t verify your information. Please click here to update and verify your information.”

“Your e-mail (or passphrase) will expire soon. To avoid any interruption please click the link below and upgrade your email.”

Have you received email with a similar message? It’s a scam called “phishing” — and it involves Internet fraudsters who send spam or pop-up messages to lure personal information (credit card numbers, bank account information, Social Security number, passwords, or other sensitive information) from unsuspecting victims.

According to OnGuard Online, phishers send an email or pop-up message that claims to be from a business or organization that you may deal with — for example, an Internet service provider (ISP), bank, online payment service, or even a government agency. The message may ask you to “update,” “validate,” or “confirm” your account information. Some phishing emails threaten a dire consequence if you don’t respond. The messages direct you to a website that looks just like a legitimate organization’s site. But it isn’t. It’s a bogus site whose sole purpose is to trick you into divulging your personal information so the operators can steal your identity and run up bills or commit crimes in your name.

We suggest these tips to help you avoid getting hooked by a phishing scam:

Don’t reply
If you get an email or pop-up message that asks for personal or financial information, do not reply. And don’t click on the link in the message, either. Legitimate companies don’t ask for this information via email. If you are concerned about your account, contact the organization mentioned in the email using a telephone number you know to be genuine, or open a new Internet browser session and type in the company’s correct Web address yourself. In any case, don’t cut and paste the link from the message into your Internet browser; phishers can make links look like they go to one place but actually send you to a different site.
Area codes can mislead
Some scammers send emails that appear to be from a legitimate business and ask you to call a phone number to update your account or access a “refund.” Because they use Voice over Internet Protocol technology, the area code you call does not reflect where the scammers really are. If you need to reach an organization you do business with, call the number on your financial statements or on the back of your credit card. And delete any emails that ask you to confirm or divulge your financial information.
Use anti-virus and anti-spyware software, as well as a firewall, and update them all regularly
Some phishing emails contain software that can harm your computer or track your activities on the Internet without your knowledge. Anti-virus software and a firewall can protect you from inadvertently accepting such unwanted files. Anti-virus software scans incoming communications for troublesome files. Look for anti-virus software that recognizes current viruses as well as older ones; that can effectively reverse the damage; and that updates automatically. A firewall helps make you invisible on the Internet and blocks all communications from unauthorized sources. It’s especially important to run a firewall if you have a broadband connection. Operating systems (like Windows or Linux) or browsers (like Internet Explorer or Netscape) also may offer free software “patches” to close holes in the system that hackers or phishers could exploit.
Don’t email personal or financial information.
Email is not a secure method of transmitting personal information. If you initiate a transaction and want to provide your personal or financial information through an organization’s website, look for indicators that the site is secure, like a lock icon on the browser’s status bar or a URL for a website that begins “https:” (the “s” stands for “secure”). Unfortunately, no indicator is foolproof; some phishers have forged security icons.
Review credit card and bank account statements to check for unauthorized charges
If your statement is late by more than a couple of days, call your credit card company or bank to confirm your billing address and account balances.
Be cautious of attachments and downloads
Be cautious about opening any attachment or downloading any files from emails you receive, regardless of who sent them. These files can contain viruses or other software that can weaken your computer’s security.
Forward phishing emails to spam@uce.gov
You can also forward emails to the company, bank, or organization impersonated in the phishing email — especially if it’s particularly realistic. Most organizations have information on their websites about where to report problems. You also may report phishing email to reportphishing@antiphishing.org. The Anti-Phishing Working Group, a consortium of ISPs, security vendors, financial institutions and law enforcement agencies, uses these reports to fight phishing.
File a complaint
If you believe you’ve been scammed, file your complaint at ftc.gov, and then visit the FTC’s Identity Theft website at ftc.gov/idtheft. Victims of phishing can become victims of identity theft. While you can’t entirely control whether you will become a victim of identity theft, you can take some steps to minimize your risk. If an identity thief is opening credit accounts in your name, these new accounts are likely to show up on your credit report. You may catch an incident early if you order a free copy of your credit report periodically from any of the three major credit reporting companies. See www.annualcreditreport.com for details on ordering a free annual credit report.

I’ve been phished! What should I do?

This depends — mostly on how much information you accidentally provided to the phishers.

In addition to reporting the phishing scam, this guide should help: for each kind of information you accidentally sent, it lists what you should do.

My email/username and password/passphrase
Change your password/passphrase immediately. If you’re using a free provider (Gmail, Hotmail, etc.) and you find an increasing, uncontrollable amount of spam, you may wish to change your email address as well.
Personal information, such as:

  • Address
  • Bank/financial account number
  • Credit Card number or information
  • Answers to security questions
  • Other personal information that can be changed
  • Driver’s license / license plate
While there’s no way to “unsend” the email, many of these pieces of information are changeable (especially credit card numbers). Contact the appropriate organization or financial institution. You should also report this as identity theft.Please note: the theft of a credit card (or credit card number) alone does not constitute identity theft (as determined by the FTC). You should, however, promptly call the financial institution and have the number changed. You can also work out any erroneous charges on your account.Also, technically, yes — your address is changeable, if you move. However, consider that only as a last resort; most identity thieves attempt to collect thousands (even millions) of individuals’ information during phishing scams; they’re likely not singling you out as a target. If you feel your personal safety threatened, contact your local police department.
Personal information that isn’t changeable — such as:

  • Social Security number
  • Mother’s maiden name
  • Date &/or city of birth
  • Health/medical information
Unfortunately, there’s not much you can do about this except defend yourself (electronically). Being proactive and staying alert/aware of your credit is your best defense.

How to Report a phishing scam

Forward spam that is phishing for information to spam@uce.gov – and to the company, bank, or organization impersonated in the phishing email. Most organizations have information on their websites about where to report problems.

If you believe you’ve been scammed, file your complaint with the FTC, and then visit the FTC’s Identity Theft website at www.consumer.ftc.gov/features/feature-0014-identity-theft. Victims of phishing can become victims of identity theft.

You also may report phishing email to reportphishing@antiphishing.org. The Anti-Phishing Working Group, a consortium of ISPs, security vendors, financial institutions and law enforcement agencies, uses these reports to fight phishing.

Other types of phishing

IVR or phone phishing
This criminal technique uses a rogue interactive voice response (IVR) system to recreate a legitimate-sounding copy of a bank or other institution’s IVR system. The victim is prompted (typically via a phishing e-mail) to call in to the “bank” via a (ideally toll-free) number provided in order to “verify” information. A typical system will reject log-ins continually, ensuring the victim enters PINs or passwords multiple times, often disclosing several different passwords. More advanced systems transfer the victim to the attacker posing as a customer service agent for further questioning. A criminal could even record the typical commands (“Press one to change your password, press two to speak to customer service” …) and play back the directions manually in real time, giving the appearance of being an IVR without the expense.
Quid pro quo
Quid pro quo means something for something:

  • An attacker calls random numbers at a company, claiming to be calling back from technical support. Eventually they will hit someone with a legitimate problem, grateful that someone is calling back to help them. The attacker will “help” solve the problem and, in the process, have the user type commands that give the attacker access or launch malware.
  • In a 2003 information security survey, 90% of office workers gave researchers what they claimed was their password in answer to a survey question in exchange for a cheap pen. Similar surveys in later years obtained similar results using chocolates and other cheap lures, although they made no attempt to validate the passwords.


Social Engineering

Don’t get tricked out of your information

In computer security, social engineering is a term that describes a non-technical kind of intrusion that relies heavily on human interaction and often involves tricking or manipulating other people to divulge confidential information or break normal security procedures.

A social engineer runs what used to be called a “con game”. For example, a person using social engineering to break into a computer network would try to gain the confidence of someone who is authorized to access the network in order to get them to reveal information that compromises the network’s security. They might call the authorized employee with some kind of urgent problem; social engineers often rely on the natural helpfulness of people as well as on their weaknesses. Appeal to vanity, appeal to authority, and old-fashioned eavesdropping are typical social engineering techniques.

Another aspect of social engineering relies on people’s inability to keep up with a culture that relies heavily on information technology. Social engineers rely on the fact that people are not aware of the value of the information they possess and are careless about protecting it. Frequently, social engineers will search dumpsters for valuable information, memorize access codes by looking over someone’s shoulder (shoulder surfing), or take advantage of people’s natural inclination to choose passwords that are meaningful to them but can be easily guessed. Security experts propose that as our culture becomes more dependent on information, social engineering will remain the greatest threat to any security system. Prevention includes educating people about the value of information, training them to protect it, and increasing people’s awareness of how social engineers operate.

All social engineering techniques are based on specific attributes of human decision-making known as cognitive biases. These biases, sometimes called “bugs in the human hardware,” are exploited in various combinations to create attack techniques, some of which are listed here.

Pretexting

Pretexting is the act of creating and using an invented scenario (the pretext) to persuade a targeted victim to release information or perform an action and is typically done over the telephone. It’s more than a simple lie as it most often involves some prior research or set up and the use of pieces of known information (e.g. for impersonation: date of birth, Social Security Number, last bill amount) to establish legitimacy in the mind of the target.

This technique is often used to trick a business into disclosing customer information, and is used by private investigators to obtain telephone records, utility records, banking records and other information directly from junior company service representatives. The information can then be used to establish even greater legitimacy under tougher questioning with a manager (e.g., to make account changes, get specific balances, etc).

As most U.S. companies still authenticate a client by asking only for a Social Security Number, date of birth, or mother’s maiden name, the method is effective in many criminal situations and will likely continue to be a security problem in the future.

Pretexting can also be used to impersonate co-workers, police, bank, tax authorities, or insurance investigators — or any other individual who could have perceived authority or right-to-know in the mind of the targeted victim. The pretexter must simply prepare answers to questions that might be asked by the victim. In some cases all that is needed is a voice that sounds authoritative, an earnest tone, and an ability to think on one’s feet.

For more information about pretexting, visit:
http://www.consumer.ftc.gov/articles/0272-how-keep-your-personal-information-secure

Phishing

Phishing is a scam where Internet fraudsters send spam or pop-up messages to lure personal and financial information from unsuspecting victims.

For more information about phishing, see our topic about phishing.


What Are Phishing Scams And How Can I Avoid Them?

On this page:

  • Phishing explained
  • Specific types of phishing
  • Avoiding phishing scams
  • Warnings
  • Reporting phishing attempts
  • Example of a phishing scam

Phishing explained

Phishing scams are typically fraudulent email messages appearing to come from legitimate enterprises (e.g., your university, your Internet service provider, your bank). These messages usually direct you to a spoofed web site or otherwise get you to divulge private information (e.g., password, credit card, or other account updates). The perpetrators then use this private information to commit identity theft.

One type of phishing attempt is an email message stating that you are receiving it due to fraudulent activity on your account, and asking you to “click here” to verify your information.

Phishing scams are crude social engineering tools designed to induce panic in the reader. These scams attempt to trick recipients into responding or clicking immediately, by claiming they will lose something (e.g., email, bank account). Such a claim is always indicative of a phishing scam, as responsible companies and organizations will never take these types of actions via email.

Specific types of phishing

Phishing scams vary widely in terms of their complexity, the quality of the forgery, and the attacker’s objective. Several distinct types of phishing have emerged.

Spear phishing

Phishing attacks directed at specific individuals, roles, or organizations are referred to as “spear phishing”. Since these attacks are so pointed, attackers may go to great lengths to gather specific personal or institutional information in the hope of making the attack more believable and increasing the likelihood of its success.

The best defense against spear phishing is to carefully, securely discard information (i.e., using a cross-cut shredder) that could be used in such an attack. Further, be aware of data that may be relatively easily obtainable (e.g., your title at work, your favorite places, or where you bank), and think before acting on seemingly random requests via email or phone.

Whaling

The term “whaling” is used to describe phishing attacks (usually spear phishing) directed specifically at executive officers or other high-profile targets within a business, government, or other organization.

Avoiding phishing scams

Reputable organizations will never use email to request that you reply with your password, Social Security number, or confidential personal information. Be suspicious of any email message that asks you to enter or verify personal information, through a web site or by replying to the message itself. Never reply to or click the links in such a message. If you think the message may be legitimate, go directly to the company’s web site (i.e., type the real URL into your browser) or contact the company to see if you really do need to take the action described in the email message.

When you recognize a phishing message, delete the email message from your Inbox, and then empty it from the deleted items folder to avoid accidentally accessing the web sites it points to.

Always read your email as plain text.

For help, see Microsoft Support.

Phishing messages often contain clickable images that look legitimate; by reading messages in plain text, you can see the URLs that any images point to. Additionally, when you allow your mail client to read HTML or other non-text-only formatting, attackers can take advantage of your mail client’s ability to execute code, which leaves your computer vulnerable to viruses, worms, and Trojans.

Warnings

Reading email as plain text is a general best practice that, while avoiding some phishing attempts, won’t avoid them all. Some legitimate sites use redirect scripts that don’t check the redirects. Consequently, phishing perpetrators can use these scripts to redirect from legitimate sites to their fake sites.

Another tactic is to use a homograph attack, which, due to International Domain Name (IDN) support in modern browsers, allows attackers to use different language character sets to produce URLs that look remarkably like the authentic ones. See Don’t Trust Your Eyes or URLs.

Reporting phishing attempts

For more about phishing scams, see Phishing.


Encryption Explained

From Wikipedia: encryption is the process of transforming information (referred to as plaintext) using an algorithm (called a cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key.

While the process of encrypting information is nothing new, encryption technologies are a hot topic in IT recently — with good reason. This article hopes to explain the various types of encryption as used regularly by IT pros.

At rest vs. in transit

Data can be encrypted two ways: at rest and in transit.

At rest

Refers to data storage — either in a database, on a disk, or on some other form of media.

Examples of at rest encryption

Common examples include full-disk or device encryption (e.g., BitLocker, FileVault, LUKS), encrypted database storage, and encrypted backups.

In transit

Refers to data that is encrypted as it traverses a network, including via web applications, smartphone apps, chats, etc. “In transit” covers the period from the point at which the data leaves the storage drive or database until it is re-saved or delivered to its destination. Protecting information in transit essentially guards against others attempting to snoop or eavesdrop on the data as it traverses the network.

Examples of in transit encryption

Common examples include TLS/SSL (the “https” in web addresses), SSH, and VPN tunnels; for email, STARTTLS protects messages as they move between mail servers.

Please note: employing these two types of encryption safeguards must occur in tandem; it’s not automatic. Data encrypted at rest does not guarantee it remains encrypted as it traverses a network. Conversely, data encrypted “over the wire” does not offer any safeguard that the content remains encrypted after it has reached its destination.

Encryption methods and protocols

The actual processes and algorithms that encryption technologies and software use differ. The current standard specification for encrypting electronic data is the Advanced Encryption Standard (AES). Almost all known attacks against AES’s underlying algorithm are computationally infeasible, in part due to its longer key sizes (128, 192, or 256 bits). If this argument sounds familiar, see: Passwords and Passphrases.

Symmetric vs. asymmetric key algorithms

Symmetric key algorithms use related, often identical, keys to both encrypt and then decrypt information. In practice, the key is a shared secret held by two or more parties.
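
A minimal illustration of a shared-secret (symmetric) scheme, assuming the third-party Python cryptography package and AES in GCM mode:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party "cryptography" package

    key = AESGCM.generate_key(bit_length=256)  # the shared secret both parties must hold
    nonce = os.urandom(12)                     # unique per message; never reuse with the same key

    aesgcm = AESGCM(key)
    ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"attack at dawn"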

Asymmetric key algorithms, however, use different keys to encrypt and decrypt information; one key encrypts (or locks) while the other decrypts (or unlocks). In practice, these are known as a public/private key pair: the public key can be shared openly, while the private key should not be. In most cryptographic systems, it is extremely difficult to determine the private key from the public key.

How this encryption works

Using public/private keys, the lock/unlock algorithm can go two ways. Alice can encrypt some bit of information with Bob’s public key, and then send it to Bob. Only the holder of Bob’s private key should be able to decrypt and read the message. Conversely, Alice could encrypt some bit of information with her own private key — and while anyone else in the world could read the message, they would have to use Alice’s public key to do so, meaning that the message must have come from Alice.
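
Sketching the first direction (Alice encrypting to Bob’s public key) with the same cryptography package, using RSA with OAEP padding; the names and key size are illustrative only.

    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # Bob generates a key pair; he publishes the public key and keeps the private key secret.
    bob_private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    bob_public_key = bob_private_key.public_key()

    oaep = padding.OAEP(
        mgf=padding.MGF1(algorithm=hashes.SHA256()),
        algorithm=hashes.SHA256(),
        label=None,
    )

    # Alice encrypts with Bob's public key; only Bob's private key can decrypt.
    ciphertext = bob_public_key.encrypt(b"meet me at noon", oaep)
    assert bob_private_key.decrypt(ciphertext, oaep) == b"meet me at noon"

The second direction described above, where the private key is used to produce something anyone can check with the public key, is the basis of digital signatures.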

Common technologies that rely on public key cryptography include TLS/SSL and PGP.

Read more about public key cryptography.
