An important part of having a website is keeping it secure. In the past security wasn't as pressing a concern, but the use of hacker bots to probe for site and database vulnerabilities has become very prevalent in the last couple of years. It is important to understand that even a static website (one that doesn't use a database) is potentially vulnerable, though CMS (Content Management System) websites are the most commonly targeted, typically through SQL injection exploits.
Static Websites
A static website will always be the more secure option since it doesn't use a database to store content (though individual script inclusions may use one). Even so, a static website can still be compromised through insecure FTP settings or through the hosting account's mail server. Spammer bots will commonly send out emails with an executable link or file that, upon clicking, infects the mail server, which in turn allows the hacker's script to upload Trojans that infect the file structure.
The purpose of the hacker's access may be to spam the internet using your mail server, to inject code into pages, to create links from your domain to spam pages, or even to remotely control your site to target others. It is good practice to periodically do a site name and URL search for your business at Google and ensure that all of the search results shown are legitimate. If your site has been compromised, it is likely that below your website's name there will be a "This site may be unsafe" notice of some type. If you click the link provided by Google, it will give you a general idea of how they believe your site was compromised.
If your static site is compromised, there are a few steps I typically take to solve the issue. In short, I download the website files and scan them for vulnerabilities. I also personally look over the file names to see if any unusual ones are in place (and review the source code of PHP pages); hacker file names will often mimic what one would find in a CMS website, or be a variation on other common file names. If you know what to look for, they are easily identifiable. It is also helpful to review file changes by date over FTP.
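As a rough illustration, that kind of scan can be sketched in Python. The suspicious-code patterns and the 30-day window below are illustrative assumptions, not a complete malware signature set:

```python
import os
import re
import time

# Patterns commonly seen in injected PHP malware. This is an illustrative,
# non-exhaustive list, not a real malware signature database.
SUSPICIOUS = re.compile(rb"eval\s*\(|base64_decode\s*\(|gzinflate\s*\(|str_rot13\s*\(")

def scan_site(root, changed_within_days=30):
    """Flag files containing suspicious code, plus any file modified recently."""
    cutoff = time.time() - changed_within_days * 86400
    flagged, recent = [], []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) > cutoff:
                recent.append(path)  # recently changed: worth a manual look
            if name.endswith((".php", ".js", ".html")):
                with open(path, "rb") as f:
                    if SUSPICIOUS.search(f.read()):
                        flagged.append(path)
    return flagged, recent
```

Anything in the `flagged` list deserves a manual review; legitimate plugins occasionally use these functions too, so this is a triage aid rather than a verdict.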
I then log in to the hosting server, delete the email accounts, and lock down the mail server. I've noted this elsewhere, but it is always best to use a webmail option, such as Gmail or GoDaddy's mail server, instead of one's hosting account mail server. Hosting account mail servers, and a personal computer's "Outlook" mail application, will not have spam and security protocols that detect and disinfect as well as a good webmail server, and if they do get infected you will likely infect your hosting account and your personal computer.
If the hosting account's email server is infected (nearly impossible to detect and clean, since the core files are system protected and not directly accessible), it is best to create a new hosting account (an easy fix when there isn't a database). I then inspect and fully secure the .htaccess file, check that the file permissions prevent writing, and so on. Once this is done, a Google Webmaster Tools account is created, and if there are hacker links on the web I will submit a removal request for every one of them in Webmaster Tools (this can be a long process). The site is then tested in Webmaster Tools to see if it is now "hacker safe," and I contact Google to review the site. In most cases you'll get the "all good," and the hacker notice will be gone in a couple of days.
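The file-permission lockdown can be sketched as follows. The file names and the read-only `0o444` mode are assumptions for illustration; a real deployment would account for the host's ownership and execution model:

```python
import os

# Hypothetical list of sensitive files to protect from writing.
SENSITIVE = ("wp-config.php", ".htaccess", "config.php")

def lock_down(root):
    """Set sensitive files to read-only (mode 444: no write bit for anyone)."""
    locked = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name in SENSITIVE:
                path = os.path.join(dirpath, name)
                os.chmod(path, 0o444)
                locked.append(path)
    return locked
```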
It is important to note that on shared hosting servers, IP addresses are assigned from that server's IP range. Your IP address may therefore appear on a blacklist because another website using the same IP as your own is infected. If this happens, email sent through your hosting account's mail application will likely be blocked by the recipient's ISP mail server even though your own site is secured and not infected. If you find your emails are not being received, you might want to check these blacklist lookup sites:
http://www.blacklistalert.org/
https://lookup.uribl.com/
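Under the hood, a DNS-based blacklist lookup reverses the IP's octets and queries them against the blacklist's DNS zone; a listing record means the address is blacklisted. A minimal Python sketch, where the zone name used is just one well-known example:

```python
import socket

def reverse_ip(ip):
    """Reverse the octets of an IPv4 address for a DNSBL query."""
    return ".".join(reversed(ip.split(".")))

def is_blacklisted(ip, zone="zen.spamhaus.org"):
    """Return True if the DNSBL zone resolves a record for this IP.
    (Makes a live DNS query; the zone is one example of many.)"""
    try:
        socket.gethostbyname(f"{reverse_ip(ip)}.{zone}")
        return True
    except socket.gaierror:
        return False
```

For example, checking `203.0.113.7` would query the hostname `7.113.0.203.zen.spamhaus.org`.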
CMS (Content Management System) Websites
A CMS site does use a database to store content, and since it is a dynamic platform it is designed to be 'writable', which of course leaves it potentially open to outside vulnerabilities. It is important to note that once someone gains access to the database, they can manually create user records to gain full administrative access. If you have a CMS such as WordPress, Joomla, or Drupal, then you will have noted the regular platform, plugin, and component updates. In addition to feature improvements, these updates typically include security fixes, so it is very important that a CMS site is regularly maintained. At a minimum, regularly back up the website files and database so you can recover the site if it is ever compromised.
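A minimal backup routine might look like the sketch below. The archive layout and the optional `mysqldump` call are assumptions; credentials and paths would come from your own hosting setup:

```python
import datetime
import subprocess
import tarfile

def backup_site(files_root, archive_dir, db_name=None, db_user=None):
    """Archive the site files; optionally dump the database with mysqldump.
    The database arguments are placeholders for illustration."""
    stamp = datetime.date.today().isoformat()
    archive = f"{archive_dir}/site-backup-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(files_root, arcname="site")  # whole site tree under "site/"
    if db_name:
        dump = f"{archive_dir}/{db_name}-{stamp}.sql"
        with open(dump, "w") as out:
            subprocess.run(["mysqldump", "-u", db_user, db_name],
                           stdout=out, check=True)
    return archive
```

Store the archives off-server; a backup sitting next to the site is lost in the same compromise.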
To help protect the CMS sites I develop, I now include a firewall on all client CMS websites. If you're one of my WordPress clients, you will note that the firewall I've installed displays on the dashboard the top five IPs blocked, top five countries blocked, and top five failed logins over the last month. This is what I look at first when I log in to a website to perform updates and review the site's security. For sites that show any evidence of hacker access, I will also add "advanced" security measures for the clients I provide security updates for.
When it comes to securing a CMS website there are a lot of considerations. The first thing to understand is that a good firewall (and other security applications of this type) uses an API key to scan a website against a database of known issues, looking for vulnerabilities. This is in effect the same thing your personal computer's virus scanner does when it looks for Trojans and other malicious scripts; most will also use HackRepair.com's blacklist for blocking/banning malicious user agents.
With firewalls, some of the features I implement include login security, automatic scheduled scans, IP address blocking, and lost password and login access notifications.
I routinely scan and check for:
- Heartbleed vulnerability
- Core files against repository versions for changes
- Theme files against repository versions for changes
- Plugin files against repository versions for changes
- Known malicious files
- Backdoor file and database access, Trojans and suspicious code
- Dangerous URLs and suspicious content in posts
- Dangerous URLs and suspicious content in comments
- Out of date plugins, themes and platform versions
- Strength of passwords
- Unauthorized DNS changes
- File vulnerabilities outside of the CMS installation
- Images and binary files scanned as if they were executable
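The file-integrity checks in this list boil down to hashing each file and comparing the result against a trusted manifest for that release. A simplified sketch, where the manifest format ({relative path: hex digest}) is an assumption for illustration:

```python
import hashlib
import os

def verify_checksums(root, manifest):
    """Return the files that differ from, or are missing against, a trusted
    manifest of SHA-256 digests (e.g. hashes recorded from a clean install)."""
    mismatched = []
    for rel, expected in manifest.items():
        path = os.path.join(root, rel)
        try:
            with open(path, "rb") as f:
                digest = hashlib.sha256(f.read()).hexdigest()
        except FileNotFoundError:
            mismatched.append(rel)  # file deleted or renamed by an attacker
            continue
        if digest != expected:
            mismatched.append(rel)  # file contents changed
    return mismatched
```

Any file this flags should be diffed against the pristine copy from the official release archive before being restored.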
While security is important, it is also important that the firewall rules allow regular "real" users to make mistakes, retrieve passwords, and so on, while "bots" get summarily blocked for exceeding common page view, crawler, and 404 (file not found) error rate thresholds.
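The 404-rate blocking described here can be sketched as a sliding-window counter per IP. The thresholds below are illustrative; real firewalls make them configurable:

```python
import time
from collections import defaultdict, deque

class NotFoundLimiter:
    """Block an IP that accumulates too many 404s inside a sliding window."""

    def __init__(self, max_404=20, window_seconds=300):
        self.max_404 = max_404
        self.window = window_seconds
        self.hits = defaultdict(deque)   # ip -> timestamps of recent 404s
        self.blocked = set()

    def record_404(self, ip, now=None):
        now = now if now is not None else time.time()
        q = self.hits[ip]
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()              # drop hits outside the window
        if len(q) > self.max_404:
            self.blocked.add(ip)     # bot-like probing: ban the address

    def is_blocked(self, ip):
        return ip in self.blocked
```

A real user mistyping a URL stays well under the threshold; a bot walking a list of "known" exploit paths trips it quickly.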
To protect CMS sites I hide the CMS platform version and have the settings set to:
- Immediately lock out invalid usernames
- Prevent the revealing of valid users in login errors
- Prevent users registering ‘admin’ username
- Prevent discovery of usernames through ‘/?author=N’ scans
- Block IPs that send POST requests with a blank User-Agent and Referer
- Hold anonymous comments using member emails for moderation
- Filter comments for malware and phishing URLs
- Check password strength on profiles
- Disable Code Execution for Uploads directory
- Enable SSL Verification for scanning
There are additional blocking options available, including blocks by IP address range, hostname, user agent (browser), and referrer (where the website visitor arrived from). I also regularly add to an IP block list for all my CMS clients. In this case, just be sure to whitelist your own IP to ensure that you don't lock yourself out.
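Two of these rules (blocking POSTs with a blank User-Agent and Referer, and blocking an IP range while whitelisting your own address) can be sketched together. The networks shown are reserved documentation examples, not real block lists:

```python
import ipaddress

class RequestFilter:
    """Illustrative firewall rules: IP-range blocking with a whitelist,
    plus rejection of POSTs with a blank User-Agent and Referer."""

    def __init__(self, blocked_networks, whitelist):
        self.blocked = [ipaddress.ip_network(n) for n in blocked_networks]
        self.whitelist = {ipaddress.ip_address(a) for a in whitelist}

    def allow(self, ip, method="GET", user_agent="", referer=""):
        addr = ipaddress.ip_address(ip)
        if addr in self.whitelist:
            return True   # whitelisted first, so you never lock yourself out
        if any(addr in net for net in self.blocked):
            return False  # address falls inside a banned range
        if method == "POST" and not user_agent and not referer:
            return False  # classic spam-bot signature
        return True
```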
Fully Securing the CMS
The following are the measures I take with CMS sites I manage that have shown any risk of hacker intrusion.
The first thing to recognize is that most CMS applications use "admin" as the administrator's username by default; it is important to rename it to something that won't be easily discovered (typically the user's login name is different from their display name). A user ID of "1" should also be changed to something else, since this is the default for administrators. Next, you want to enforce strong passwords for all users by utilizing auto-generated ones; the dashboard should be hidden, and the security menu should be hidden within the admin bar.
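For the auto-generated strong passwords mentioned above, something like the following works; the 20-character default is an arbitrary but reasonably strong choice:

```python
import secrets
import string

def generate_password(length=20):
    """Auto-generate a strong random password from letters, digits,
    and punctuation, using a cryptographically secure source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The point of `secrets` over `random` is that its output is not predictable from previous values, which matters for credentials.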
Other considerations include:
- Ensuring that the front page of the site uses a safe version of jQuery (not long ago the Revolution Slider's jQuery had a vulnerability that allowed hacker access).
- Implementing an "away mode" to disable access for a specific period of time.
- Changing the content directory's name (bots use scripts that search "known" paths for exploits, so changing a folder/directory name can be very effective).
- Protecting common files from access by securing the folder/file permissions (it is especially important to ensure that the configuration file and the .htaccess file are not writable).
- Ensuring the database table prefix is not the default one, such as "wp_" for WordPress.
- Periodically changing the configuration file's salts used for encryption.
- Disabling directory browsing.
- Blocking HTTP request methods you do not need.
- Blocking non-English characters in the URL.
- Ensuring users cannot edit plugin and theme files directly from within the administration area.
- Ensuring the login page does not give out unnecessary information upon a failed login.
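The non-English-character rule from the list above can be approximated with a simple printable-ASCII filter; a legitimate international site would need a more permissive rule than this sketch:

```python
import re

# Printable ASCII only (space through tilde). Anything outside this range
# in a request path is rejected under this (deliberately strict) policy.
ASCII_URL = re.compile(r"^[\x20-\x7E]*$")

def url_is_clean(path):
    """Return True if the request path contains only printable ASCII."""
    return bool(ASCII_URL.match(path))
```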
Both firewalls and site security applications will protect the login area from brute-force attacks, using an API protection key to scan for the known vulnerabilities that bots exploit. Some of the common security measures include:
- Detecting changes in file names
- Blocking of suspicious-looking information in the URL.
- Not allowing users without a user agent to post comments.
- 404 detection (watches for a user hitting a large number of non-existent pages, and blocks site access past an error threshold).
- Blocking XML-RPC requests with multiple authentication attempts.
- Users cannot execute PHP from the uploads folder.
- User profiles are not publicly available.
- Blocking known bad hosts and agents with the ban users tool.
- Denial of excessively long URLs.
- Not publishing the Windows Live Writer header or the Really Simple Discovery (RSD) header.
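The XML-RPC measure is worth a sketch: `system.multicall` lets a bot pack many login attempts into a single request, so attempts should be counted per credential try rather than per HTTP request. The limits below are illustrative:

```python
from collections import defaultdict

class XmlRpcGuard:
    """Throttle XML-RPC authentication attempts per IP. A multicall
    carrying N credential pairs is recorded as N failures, not one."""

    def __init__(self, max_attempts=5):
        self.max_attempts = max_attempts
        self.failures = defaultdict(int)

    def allow_attempt(self, ip):
        return self.failures[ip] < self.max_attempts

    def record_failures(self, ip, count=1):
        # count = number of credential pairs tried in the request
        self.failures[ip] += count
```

A single multicall request trying five passwords exhausts the limit immediately, which is the behavior that per-request counting would miss.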