Category Archives: General

SEO Indexing Services and Why You Don’t Need Them

SEO indexing is the idea of getting a new site indexed, or included within Google, as quickly as possible. However, there are companies who prey on people not in the know and sell SEO indexing as a much bigger service. In reality, getting your site indexed is something you can do yourself if you have a spare 15 minutes. Worse still, some companies are charging hundreds of pounds for SEO indexing – it makes my eyes water.

Continue reading

ECommerce Content Source Ordering for Product Detail Pages

Content source ordering, or SOC (source ordered content), is the idea that content nearer the top of the raw HTML source code carries greater weight and meaning for search engines. For instance, a paragraph of text right at the top of the HTML source has more meaning than the same passage appearing in the footer. This is very useful for those all too common generic menus (home, about, contact etc.) that have no SEO benefit at all, yet appear at the top of every page of your site. With SOC and absolute positioning of DOM elements, it is possible to place this HTML at the very bottom of the source code, thus giving greater weight to your page content.

This is not a new idea by any means, but it is generally considered good practice to implement on any site.

Continue reading

Passing Arguments as an Associative Array to a Function

Sometimes it is useful and slightly cleaner to pass arguments to a function via an associative array as opposed to a list of variables. There are a variety of methods to achieve this, such as using PHP’s extract function – but the below is cleaner in my opinion. Please note, the following functionality is commonplace when using a good MVC framework or a good CodeIgniter base class. Take a sample PHP class:

class setOptions {

    // Default settings, used unless the caller overrides them.
    public $temp = '/default_temp/';
    public $upload = '/default_upload/';
    public $cache_time = '0';
    public $cache_status = '0';
    public $cache_file = 'users.txt';

    public function __construct($user_settings)
    {
        // Expect an associative array of setting => value pairs.
        if (is_array($user_settings)) {
            foreach ($user_settings as $k => $v) {
                $this->assignSetting($k, $v);
            }
        } else {
            die('Config Error!');
        }
    }

    public function assignSetting($name, $value)
    {
        // Only settings named in this whitelist may be overridden.
        $whitelist = array('temp', 'upload', 'cache_time', 'cache_status', 'cache_file');

        if (in_array($name, $whitelist)) {
            // The setting name doubles as the property name.
            $this->$name = $value;
        }
    }

}

At the top of the class we set the default values of the settings used by our class. We can choose to leave these as they are, or pass the class an array of new settings to overwrite them.

The settings, as an associative array, are passed to the magic __construct function so all the settings are available as soon as the class is instantiated. We first check to ensure that the data passed to the function is actually an array, and then simply loop through each key/value pair of the array.

On each iteration the assignSetting function is called. The function takes a setting name and value as its arguments, builds a whitelist of allowed settings as an array, and checks that the setting we are attempting to add is within that whitelist before assigning it.
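To tie it together, here is a quick usage sketch (the values passed in are just examples):

$options = new setOptions(array(
    'temp'       => '/tmp/my_app/',
    'cache_time' => '3600',
));

echo $options->temp;       // '/tmp/my_app/'
echo $options->cache_file; // 'users.txt' – the default is retained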

Continue reading

Detect AJAX Requests Using the X-Requested-With Header and XMLHttpRequest

This is a small snippet of code I came across today; it allows a script to display different content based on how it was requested. This method allows your scripts to remain in a single file, handling both AJAX and normal requests – it avoids ending up with lots of small PHP files in your AJAX folder just to deal with AJAX requests. Another use would be a page that has two web forms, one AJAX and one normal; you could keep the code for this page in a single file. This method is also useful for security purposes, as it can help ensure that requests to your AJAX scripts are made via AJAX only. It also has uses for writing unobtrusive JavaScript – for example, ensuring that an AJAX-enabled web form still works when JavaScript is disabled.

For example, the below code would display different content depending on whether the request for the page was made via AJAX or directly via a browser.

if(isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest') {
    //This is an AJAX request, do AJAX specific stuff
}
else {
    //This is not an AJAX request E.g. display normal page content.
}

In some code I was working on today, I saw a neater way of achieving the above; this would be included in your common config file:

define('IS_AJAX_REQUEST', isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest');

if(IS_AJAX_REQUEST) {
    //This is an AJAX request, do AJAX specific stuff
}
else {
    //This is not an AJAX request E.g. display normal page content.
}

There’s an HTTP variable called HTTP_X_REQUESTED_WITH, which is set to ‘XMLHttpRequest‘ if the request was made via AJAX. This method is untested with JavaScript frameworks other than jQuery, so it may not work with them (but I don’t see any reason at all why it wouldn’t!).

It’s also worth noting that not all web servers include this variable and some omit this specific $_SERVER parameter. Use var_dump($_SERVER) to check that HTTP_X_REQUESTED_WITH is present.
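As a rough sketch of the single-file pattern described above – the JSON response and the $message variable are just placeholders:

define('IS_AJAX_REQUEST', isset($_SERVER['HTTP_X_REQUESTED_WITH']) && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) == 'xmlhttprequest');

$message = 'Hello!';

if (IS_AJAX_REQUEST) {
    // AJAX caller: return just the data as JSON and stop.
    header('Content-Type: application/json');
    echo json_encode(array('message' => $message));
    exit;
}

// Normal request: fall through and render the full page.
echo '<html><body><p>' . $message . '</p></body></html>';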

Prevent Duplicate Content Using the Canonical URL Tag

I was recently doing an SEO audit of a small ecommerce website. One of the first things I did was a ‘site:www.domain.com’ search in Google. Amazingly, the site in question had approximately 8,200 pages indexed in Google. This was quite surprising when the store sold fewer than 1,500 unique products. The site used a horrible ecommerce module bolted onto PHP-Nuke and had an equally horrible URL structure, appending lots of unnecessary querystring data onto the URL.

Whilst looking through the Google results for this site, the majority of the indexed pages were as follows:

/index.php?tab=123&txtSearch=ALL&List=oasc&Sort=PName%2CPName&CreatedUserID=1&pageindex=40&Language=en-GB

The site has an advanced search page, whereby you can sort products using a variety of options such as ascending order, descending order, size, price etc. This is bad for a number of reasons, but mainly due to duplicate content (not to mention lower SERP rankings, traffic loss and decreased page relevancy). A page of results in ascending and descending order is essentially the same page, simply a different view of your data – you can help search engines here via the relatively new canonical URL link tag.

To illustrate I’ll use an example of a typical category page, whereby you can sort a list of products in ascending and descending order, leaving you with a number of URLs as follows:


http://www.shop.com/category.php?catName=Shirts&sortOrder=ASC
http://www.shop.com/category.php?catName=Shirts&sortOrder=DESC

In this example the part of the querystring creating the duplicate content is the sortOrder parameter – you would still want your separate categories indexed.

The solution is quite simple. In your head tag add the following:

<link rel="canonical" href="http://www.shop.com/category.php?catName=Shirts" />

By adding this to your category page you are telling search engines (currently Google, Yahoo, Ask and Bing support this tag) that this page is a copy of http://www.shop.com/category.php?catName=Shirts. Indicators such as Google PageRank are also transferred to your preferred URL.
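On a dynamic page you could generate the tag with a little PHP. A minimal sketch, assuming the catName and sortOrder parameters from the example above:

<?php
// Build the canonical URL from only the parameters that identify
// the page, dropping view-only parameters such as sortOrder.
$canonical = 'http://www.shop.com/category.php?' . http_build_query(array(
    'catName' => isset($_GET['catName']) ? $_GET['catName'] : '',
));
?>
<link rel="canonical" href="<?php echo htmlspecialchars($canonical); ?>" />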

The canonical URL tag has many uses and can help with the following issues:

  • Pages that contain session IDs appended to the querystring
  • Search results pages that append search data to the querystring
  • Print versions of pages
  • Duplicate content for www. and non-www. pages – in your canonical tag you would include your preferred URL
  • Same content contained in multiple categories – E.g. a product contained in multiple categories on an online store
  • Removing affiliate IDs from the URL
  • Preventing multiple pages of a discussion topic with comments from being indexed E.g. shop.com/post.php?id=123&page=1

You can read more about the canonical tag at the official Google Webmaster Blog. Matt Cutts also has a 20 minute video explaining the canonical tag in more depth.

The main point to consider is that the canonical tag is simply a hint and not a directive. It is another method of giving search engines help in indexing your content. This is very useful when working on existing sites already indexed by Google. However, on new sites a bit more planning can help. For instance, in a previous article I covered 301 redirects for SEO using htaccess – how to set a preferred version of your site via htaccess. On an ecommerce store you could also avoid appending search data to the querystring in the first place.

EDIT: WordPress and the All in One SEO plugin generate canonical link tags for blog posts. For example, comments are separated into multiple pages E.g.


http://www.web-design-talk.co.uk/157/how-to-deal-with-difficult-clients-using-split-testing/comment-page-1/#comment-344

With the actual content being at:


http://www.web-design-talk.co.uk/157/how-to-deal-with-difficult-clients-using-split-testing

If you have a quick look at the source code to the comments page you’ll see the following has been added:

<link rel="canonical" href="http://www.web-design-talk.co.uk/157/how-to-deal-with-difficult-clients-using-split-testing/" />

How to Deal With Difficult Clients Using Split Testing

Sometimes you can be in the process of trying to tell a client that their idea simply won’t work. Be it a flimsy campaign idea or an extra design element that you know through experience will not work or produce the desired KPIs for the client project. You can even show the client links, articles and examples of why their idea will fail to deliver results. However, if this is a potential or existing client they are likely to go elsewhere, to a company willing to follow their every word without consideration – I have come across web companies who will do this.

Recently an existing client came to me asking why his site wasn’t showing up when people searched for a particular long tail search term. His existing site used a pretty awful content management system that didn’t even allow him to set his own page titles or meta descriptions. Furthermore, he was lacking the inbound links that the people ranked above him did have. This all sounds simple and straightforward, but even after I had explained (in quite clear and non-technical language, may I add) the merits of good SEO and on-page content, the client simply wouldn’t accept this as a solution. He had his own short term and less costly solutions – basically revolving around the idea of me resubmitting his sitemap to all the major search engines each day. I’m not debating that submitting a sitemap isn’t a good idea, because it is. However, the client’s main KPI for this project was increased site enquiries.

After much discussion we had both come to a bit of an awkward silence – not a good thing if you’ve ever experienced this in client meetings. This is quite a delicate situation to be in, as it can damage your client relationship quite quickly. For some reason I remembered back to my university days, where I had read something about split testing (or A/B testing) – a way to turn a negative situation into a positive one.

The idea was to use the client’s idea for a period of time and my idea for a period of time. At the start of this I would install Google Analytics (I was tempted to use Google’s Website Optimizer, but decided against it) and let the statistics do the talking – as no one can argue with statistics.

This method has been very useful previously when demonstrating the merits of creating a dedicated landing page for Google AdWords campaigns, but it can be used anywhere if you’re willing to do a little bit extra.

This method is beneficial for the following reasons:

  • The client’s ideas are not simply being dismissed as wrong (however right you think you are)
  • You are showing the client that you care enough to demonstrate your ideas
  • Occasionally the client will back down as soon as you explain your plan of attack
  • It prevents those awful awkward silences
  • You have a real world example to use in your other client meetings
  • You are speaking the client’s language, in that you are demonstrating how your actions lead to increased conversions
  • You are being direct, which I personally think is always a good thing – such statistics are often a huge eye opener for clients
  • If and when the client comes to the same conclusion as you, they won’t blame you

There is always the argument that the client is the client and that it’s all business at the end of the day. However, I personally pride myself on doing things properly. Others will say just get on with it, do what the client wants and forget about it – you can only offer your opinion. This is a good point, but it can still damage your client relationships when they return later on and you need to charge them again. It all depends on whether you need long or short term client relationships – they are definitely an investment.

Tracking Twitter Performance Using Google Analytics

If you use the ever popular Twitter there’s a high chance you’ll be linking to your company website or personal blog in your tweets or profile link. As the aim is to use Twitter as a marketing tool to drive traffic, you can use Google Analytics to track the link you place in your Twitter profile – just like an email campaign or PPC advert.

If you use Twitter as a marketing tool to drive traffic to your site then you should treat it in exactly the same way as you would a newsletter, a PPC advert or a banner and track each Tweet’s performance beyond simple click data. How many visits do you get, how long do they stay on your site, how deep do they go, what is the bounce rate like and how much revenue do they generate?

The benefit of ‘tagging’ this link is that Google Analytics will record more than just basic click data – you can record a whole host of advanced user data, such as how visitors navigate your site and the length of their visits. By default Google will track such links, but traffic from URL shorteners such as bit.ly will be dumped into the direct traffic section of Google Analytics. The steps to get tagging up and running are quite simple:

1: Go to Google’s URL builder to generate a URL. Enter the following information:

Website URL: your website address

Campaign Source: enter a relevant source here to identify where the traffic comes from E.g. twitter

Campaign Medium: enter the marketing medium E.g. social

Campaign Name: enter a name used to identify the campaign in Google Analytics E.g. twittertrack

2: Click generate URL and something similar to the following will be created: http://www.web-design-talk.co.uk/?utm_source=twitter&utm_medium=social&utm_campaign=twittertrack


3: If posting to Twitter you can paste this URL directly into the tweet box, as Twitter will automatically shorten the URL.

4: After approximately 24 hours data will appear in your Analytics account. Simply navigate to Traffic Sources. If you’ve used the same terms to build the URL as above, you’ll see an entry called ‘twitter / social’. You can also view information by navigating to Traffic Sources > Campaigns, where you can click the campaign name (‘twittertrack’ was used in the example above).

Google Analytics Once Tracking is Installed
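If you’d rather not visit the URL builder every time, the same tagging can be done with a few lines of PHP. A minimal sketch – the helper name here is my own invention:

// Append Google Analytics campaign parameters to a URL,
// mirroring what Google's URL builder produces.
function tag_campaign_url($url, $source, $medium, $campaign)
{
    $params = http_build_query(array(
        'utm_source'   => $source,
        'utm_medium'   => $medium,
        'utm_campaign' => $campaign,
    ));

    return $url . (strpos($url, '?') === false ? '?' : '&') . $params;
}

echo tag_campaign_url('http://www.web-design-talk.co.uk/', 'twitter', 'social', 'twittertrack');
// http://www.web-design-talk.co.uk/?utm_source=twitter&utm_medium=social&utm_campaign=twittertrack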

Getting htaccess Mod-Rewrite Rules Working Locally with XAMPP

After spending a whole 2 hours of my life trying to get Apache mod-rewrite rules working with XAMPP on a local computer, I thought I’d share my results, as I seemingly tried everything. The problem: I have a simple mod-rewrite rule in my htaccess file. When I upload this to my online web host everything is fine – the working htaccess file for my online host:

RewriteBase /
RewriteEngine on
RewriteRule amnesia/resetpass(.*) recover-password.php$1 [PT]

So typing in www.domain.com/amnesia/resetpass does a simple rewrite to www.domain.com/recover-password.php, without the user ever knowing. All is fine. However, when I tried to get this seemingly simple rule to work with XAMPP I ran into problems, getting 404 and 500 responses from the server – obviously quite a pain, as this essentially means I can’t test the site using my own web server (E.g. localhost). The site is hosted from my computer via the normal setup E.g. xampp/htdocs/mysite. I’ll jump straight to the solution and then explain exactly what was changed – the working htaccess file is below:

RewriteEngine on
RewriteBase /mysite
Options +FollowSymLinks
RewriteRule amnesia/resetpass(.*) recover-password.php$1 [PT]

Firstly, the extra line that uses the +FollowSymLinks directive was added. To explain this I’ll quote straight from the Apache documentation:

To enable the rewriting engine for per-directory configuration files, you need to set “RewriteEngine On” in these files and “Options FollowSymLinks” must be enabled. If your administrator has disabled override of FollowSymLinks for a user’s directory, then you cannot use the rewriting engine. This restriction is needed for security reasons.

The rewrite base has been changed to the relative path of the website directory. To finish up, open the httpd.conf file (the default settings for XAMPP, which get overridden by your .htaccess file rules on a per-directory basis), located by default at C:\xampp\apache\conf\httpd.conf. Find all occurrences of AllowOverride None and change them to AllowOverride All. After restarting XAMPP everything should work. In a nutshell, the AllowOverride directive in httpd.conf declares which directives in .htaccess files can override directives from httpd.conf – this is discussed in more depth over here – but basically, by having this directive set to None you’re stopping individual htaccess files from working locally.

SEO Friendly URLs With Mod Rewrite

So called ‘dirty URLs’ (E.g. www.domain.com/products.asp?id=45&cat=34&mode=view) not only look untidy but also pose a security risk, as they expose the underlying technology used – in this case, ASP. A much preferred URL would be www.domain.com/product/45 or, even better, www.domain.com/product/product-keywords-here. The latter URL structure not only improves the usability of your site (the URL makes more sense to the user) but is also argued to improve search engine rankings. There is a lot of debate on this subject, but everyone agrees that these so called pretty URLs don’t hurt anything and mainly improve user experience. Google has also recently posted a video (obviously not giving much away) saying that SEO friendly URLs do in fact make a small difference and don’t hurt SERPs.

Take the example of this very blog. Pretty URLs are used to display the post title and ID within the URL. There is the option to include only the title, but this has been shown to slow down the general performance of your blog. I digress – let’s get onto some examples where simple URL rewriting with mod rewrite is useful.
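Before the examples, the basic shape is worth sketching. Assuming an htaccess rule along the lines of RewriteRule ^product/([0-9]+) product.php?id=$1 (a made-up rule for illustration), the PHP side is untouched – the id still arrives as a normal querystring parameter:

// product.php – the rewrite is invisible to the script; the id
// arrives in $_GET exactly as it would with a 'dirty' URL.
$id = isset($_GET['id']) ? (int) $_GET['id'] : 0;

if ($id > 0) {
    // Look the product up as usual.
    echo 'Showing product ' . $id;
}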

Continue reading

CSS Sprites and Website Optimization

One of the latest and most well established design practices is CSS sprites. This is the practice of combining multiple images into a single, larger composite image. By using the CSS background-position property, selected portions of that master image (or sprite) are displayed. The main question here is how a larger image with a larger file size can be beneficial, especially when compared to several smaller images. The answer lies in HTTP requests, and Yahoo’s 80/20 rule explains this much better than I could! To summarise, the number of HTTP requests to the website is drastically cut, thus loading the page much faster. Another major benefit is that no JavaScript is required for mouseover code, so you can make image rollovers easily. I have used this technique in the past but, like a lot of people, never knew it was called sprites.

In fact, using sprites is so effective that many of the internet’s biggest sites use them, all in slightly different ways. On such sites a truly huge number of requests are saved every day. For example, YouTube, Google, AOL, Amazon and Apple all use CSS sprites. Take the minimalist example of Google (left):

Google CSS Sprite

YouTube does things slightly differently and uses an absolutely massive sprite, 2800px in height! You’ll notice that some sprites are tightly packed together and others have spacing around each image. This is to allow for browser based text resizing, which would otherwise cause neighbouring images in the sprite to show through. A good example of planning for such an occurrence is Digg’s sprite, where each image is generously spaced out.

The CSS is relatively simple. Take the example of a simple unordered list with a different image for each item. I made a very simple sprite and applied the background-position property to each link class. Here’s the simple HTML used for the list:

Continue reading