Duplicate Content Problems

Duplicate content can be a real pain when you try to get your pages indexed. From duplicated text to different URLs for one page, messy internal structures and duplicate meta tags, these issues all confuse Googlebot and leave you with lower rankings for your site in the search engines. Several tools are available to track duplicate content, but starting with the following tips will already get you a lot further in your fight against it.

Duplicate URLs are often the result of messy internal navigation, and they do harm because Googlebot will spot different URLs that lead to the same page. For example, you could have one page for a product you sell online, but several links on your site pointing to it in different ways, creating different URLs for the same product page (www.computers.com/dell and www.computers.com/brand=id34567). One URL pointing to one page is optimal and will help you avoid Google penalties for duplicate content.
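As a rough illustration of the idea, the Python sketch below normalizes a few URL variants so they collapse to a single canonical form. The normalize_url helper and the example URLs are purely illustrative, not part of any particular CMS or SEO tool.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize_url(url):
    """Reduce a URL to a canonical form so duplicate variants become comparable."""
    scheme, netloc, path, query, _fragment = urlsplit(url)
    # Lowercase the host, drop the "www." prefix and any trailing slash.
    netloc = netloc.lower()
    if netloc.startswith("www."):
        netloc = netloc[4:]
    path = path.rstrip("/") or "/"
    # Sort query parameters and drop session/tracking parameters (illustrative list).
    params = [(k, v) for k, v in parse_qsl(query) if k not in ("utm_source", "sessionid")]
    query = urlencode(sorted(params))
    return urlunsplit((scheme, netloc, path, query, ""))

urls = [
    "http://www.computers.com/dell",
    "http://www.computers.com/dell/",
    "http://computers.com/dell?sessionid=42",
]
# All three variants collapse to the same canonical URL.
print({normalize_url(u) for u in urls})  # -> one entry
```

If several raw URLs map to the same normalized form, that is exactly the situation where you would pick one of them as the single URL for that page.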

The weight of meta tags and meta descriptions in the search algorithms is debated, but to be absolutely sure you are not hurting your indexation it's wise to avoid duplicates. Your meta descriptions are in fact a strong lead for spiders, and the effort of writing decent, unique meta descriptions is well worth it compared to the potential harm of serving duplicate content.
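To spot reused descriptions across a site, a small script along the lines of the sketch below can help. The MetaDescriptionParser class, the find_duplicate_descriptions helper and the sample pages are hypothetical names built on Python's standard html.parser module.

```python
from collections import defaultdict
from html.parser import HTMLParser

class MetaDescriptionParser(HTMLParser):
    """Collect the content of the <meta name="description"> tag from an HTML page."""
    def __init__(self):
        super().__init__()
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "description":
            self.description = attrs.get("content", "").strip()

def find_duplicate_descriptions(pages):
    """Map each meta description to the URLs that use it and return the reused ones."""
    seen = defaultdict(list)
    for url, html in pages.items():
        parser = MetaDescriptionParser()
        parser.feed(html)
        if parser.description:
            seen[parser.description].append(url)
    return {desc: urls for desc, urls in seen.items() if len(urls) > 1}

pages = {
    "/dell": '<head><meta name="description" content="Cheap laptops"></head>',
    "/hp":   '<head><meta name="description" content="Cheap laptops"></head>',
}
print(find_duplicate_descriptions(pages))  # both pages share one description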

The spiders mentioned earlier might also struggle when your content is nested too deep in your code. Bloated headers and sites that don't use CSS in particular challenge the spiders, so streamlining your code is always a plus to make sure all content is indexable. For the same reason, avoid big blocks of repeated information in the header.
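There is no single metric for "too deep", but as a rough heuristic you could measure how many tags wrap your visible text. The ContentDepthParser below is an illustrative sketch only (void tags such as <img> are not closed and will slightly inflate the count).

```python
from html.parser import HTMLParser

class ContentDepthParser(HTMLParser):
    """Rough heuristic: track how deeply nested the visible text of a page is."""
    def __init__(self):
        super().__init__()
        self.depth = 0
        self.max_text_depth = 0

    def handle_starttag(self, tag, attrs):
        self.depth += 1  # void tags (<img>, <br>) never close, so this overcounts a little

    def handle_endtag(self, tag):
        self.depth = max(0, self.depth - 1)

    def handle_data(self, data):
        if data.strip():
            self.max_text_depth = max(self.max_text_depth, self.depth)

parser = ContentDepthParser()
parser.feed("<html><body><table><tr><td><div><p>Buried content</p></div></td></tr></table></body></html>")
print(parser.max_text_depth)  # the higher the number, the deeper the content is buried
```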

Obviously, creating unique content is the best way to avoid duplication. Even when you run several blogs or sites, and however much time and money it may cost, don't cut corners on unique content. Emphasize the important keywords, titles and links, as the <h1>, <h2> and <b> tags help the bots set pages apart and recognize page-specific content.
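One quick way to sanity-check this across pages is to compare the heading text they expose. The headings helper below is a simplistic, illustrative sketch that uses a regular expression rather than a full HTML parser.

```python
import re

def headings(html):
    """Pull the text of <h1> and <h2> tags -- the labels bots lean on to tell pages apart."""
    return [m.group(2).strip() for m in re.finditer(r"<(h[12])[^>]*>(.*?)</\1>", html, re.I | re.S)]

page_a = "<h1>Dell laptops</h1><h2>Specifications</h2><p>...</p>"
page_b = "<h1>Dell laptops</h1><h2>Specifications</h2><p>...</p>"
# Identical headings across pages hint that the pages will look duplicated to a bot.
print(headings(page_a) == headings(page_b))  # True -> differentiate the titles
```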
