javascript · html · ajax · seo

Design and SEO for a single page dynamic website with AJAX


I designed a website in which the entire site is contained within one page (index.php). Within the page, <section> tags define the different parts of the site (home, contact, blog, etc.).

Navigation is achieved by always-visible buttons that, when clicked, use JavaScript to change the visibility of the sections, so that only one is shown at any time.

More specifically, this is done by using the hash in the URL and handling the hashchange event. This results in URLs such as www.site.com/#home (the default if no other hash is present) and www.site.com/#contact.
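
To illustrate, the handler boils down to something like this (a minimal sketch only; it assumes each <section> has an id matching its hash, which may differ from the actual markup):

    // Show only the section whose id matches the current hash.
    // Assumes markup like <section id="home">, <section id="contact">, etc.
    function showSection() {
        var id = location.hash ? location.hash.slice(1) : 'home'; // #home is the default
        var sections = document.querySelectorAll('section');
        for (var i = 0; i < sections.length; i++) {
            sections[i].style.display = (sections[i].id === id) ? '' : 'none';
        }
    }

    window.addEventListener('hashchange', showSection);        // e.g. #home -> #contact
    window.addEventListener('DOMContentLoaded', showSection);  // initial load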

I want to know if this is a good design. It works, but I get the feeling there must be a better way to achieve the same thing. To clarify, I was aiming for a site that loads all the main content once, so that there are no further page loads after the initial one and moving between sections is smoother.

On top of this, another problem is introduced concerning SEO. The site shows up in Google, but if, for example, a search query matches a term in a specific section, clicking the result still loads the default #home page, not the section the term was found in. How can I rectify this?

Finally, one of the sections is a blog section, which is the only section that does not load all at once, since by default it loads the latest post from a database. When a user selects a different post from a list (which is itself loaded using AJAX), AJAX is used to fetch and display the new post, and pushState changes the history. Again, to give each post a unique URL that can be referenced externally, the menu changes the URL, which is handled by JavaScript, resulting in URLs such as www.site.com/?blogPost=2#blog and www.site.com/?blogPost=1#blog.
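
The post-loading code has roughly this shape (a sketch only; the getPost.php endpoint and the #blog-post container are illustrative assumptions, not the actual code):

    // Fetch a blog post over AJAX, display it, and record it in the history.
    // getPost.php and #blog-post are hypothetical names for illustration.
    function loadPost(id) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', 'getPost.php?id=' + encodeURIComponent(id));
        xhr.onload = function () {
            if (xhr.status === 200) {
                document.getElementById('blog-post').innerHTML = xhr.responseText;
                // Give the post a unique, externally referenceable URL.
                history.pushState({ blogPost: id }, '', '?blogPost=' + id + '#blog');
            }
        };
        xhr.send();
    }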

These posts aren't seen by Google at all. Using the Fetch as Googlebot tool shows that the crawler sees the blog section as always empty, so none of the blog posts are indexed.

What can I change?

(I don't know if this should be on the Webmasters Stack Exchange, so sorry if it's in the wrong place.)


Solution

  • Build a normal site. Give each page a normal URL. Let Google index those URLs. If you don't have pages for Google to index, then it can't index your content.

    Progressively enhance the site with JS/Ajax.

    When a link is followed (or any other action is performed that, without JS, would load a new page), use JavaScript to transform the current page into the target page.

    Use pushState to change the URL to the URL that would have been loaded if you were not using JavaScript. (Do this instead of using the fragment identifier (#) hack.)

    Make sure you listen for history events (popstate) so you can transform the page back when the back button is clicked; see the sketch after the steps below.

    This results in situations such as:

    1. User arrives at /foo from Google
    2. /foo contains all the content for the /foo page
    3. User clicks link to /bar
    4. JavaScript changes the content of the page to match what the user would have got from going to /bar directly and sets URL to /bar with pushState
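
    A sketch of that flow, assuming the server can return each page's content at its own URL and that the shared markup has a #content container (both assumptions for illustration):

        // Load a page's content into the current document and update the URL.
        // Falls back to a normal page load if anything goes wrong.
        function loadPage(url, push) {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', url);
            xhr.onload = function () {
                if (xhr.status !== 200) { location.href = url; return; }
                // Assumes the response can be dropped into #content;
                // alternatively, parse the full page and extract the relevant part.
                document.getElementById('content').innerHTML = xhr.responseText;
                if (push) history.pushState({ url: url }, '', url);
            };
            xhr.onerror = function () { location.href = url; };
            xhr.send();
        }

        // Intercept clicks on same-origin links; without JS, the href still works.
        document.addEventListener('click', function (e) {
            var link = e.target.closest && e.target.closest('a');
            if (link && link.origin === location.origin) {
                e.preventDefault();
                loadPage(link.pathname, true);
            }
        });

        // Transform the page back when the back/forward buttons are used.
        window.addEventListener('popstate', function () {
            loadPage(location.pathname, false);
        });

    With this in place, /foo and /bar remain ordinary crawlable URLs; the script is only an enhancement on top of them.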

    Note that there is also the (not recommended) hashbang (#!) technique, which hacks a one-page site into a form that Google can index, but it is not robust, doesn't work for any other non-JS client, and is almost as much work as doing things properly.