Tuesday, January 31, 2012

Blogspot.com is redirecting to Blogspot.in

From now on, Blogger blogs will redirect to a country-level domain extension. I usually read the Google Webmaster Central blog, the official Google blog, the Gmail blog, and similar sources to keep up with the latest updates from Google. Today, January 31st, 2012, I noticed that Blogspot.com is automatically redirecting to Blogspot.in. Since I live in India, it redirects to ".in"; if I lived in the UK, it would redirect to ".co.uk".


Here is the official information from Google regarding this change: Blogspot.com is redirecting to country-specific URLs.

Points to know regarding this change:

1. Duplicate content is the first concern in this case. However, Google states that the "rel=canonical" tag will be used across all country-level extensions, and its team is trying to minimize any negative impact on search results.

2. Google receives a large number of requests to remove content from blogs, so it would like to manage content removal country by country. Content that one country won't accept may be acceptable in others. With this update, content removed under a particular country's law will only be removed from the relevant ccTLD and will remain available on the other country domains.

3. Custom domains will not see any effect. Free Blogspot sites will simply redirect to the country-specific extension; everything else remains the same.

4. If visitors would like to view the non-country-specific version, here is the format: http://domain.blogspot.com/NCR (a quick way to check both the redirect and the canonical tag is sketched below).
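
For anyone who wants to see this behaviour for themselves, here is a minimal Python sketch using the requests library. The blog address is a placeholder, and the exact status codes and markup Blogger returns may differ; treat it as an illustration, not a definitive test.

```python
import requests

# Placeholder address, purely for illustration.
blog = "http://example-blog.blogspot.com/"

# Fetch without following redirects to observe the country-level redirect itself.
resp = requests.get(blog, allow_redirects=False)
print(resp.status_code)              # e.g. 302 when a ccTLD redirect applies
print(resp.headers.get("Location"))  # e.g. http://example-blog.blogspot.in/

# The "no country redirect" (NCR) path should serve the .com version directly.
resp_ncr = requests.get(blog + "ncr", allow_redirects=False)
print(resp_ncr.status_code)

# Google says the country-level copies will carry a rel=canonical tag to limit
# the duplicate-content impact; a crude check is to look for it in the HTML.
page = requests.get(blog)
print('rel="canonical"' in page.text)
```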

Friday, January 20, 2012

Google Forecloses On Content Farms With “Panda” Algorithm Update


In January, Google promised that it would take action against content farms that were gaining top listings with "shallow" or "low-quality" content. Now the company is delivering, announcing a change to its ranking algorithm designed to take out such material.

New Change Impacts 12% Of US Results
The new algorithm — Google’s "recipe" for how to rank web pages — started going live yesterday, the company told me in an interview today.

Google changes its algorithm on a regular basis, but most changes are so subtle that few notice. This is different. Google says the change impacts 12% (11.8% is the unrounded figure) of its search results in the US, a far higher impact than most of its algorithm changes have. For now, the change only affects results in the US; it may be rolled out worldwide in the future.

While Google has come under intense pressure in the past month to act against content farms, the company told me that this change has been in the works since last January.

Officially, Not Aimed At Content Farms
Officially, Google isn’t saying the algorithm change is targeting content farms. The company specifically declined to confirm that, when I asked. However, Matt Cutts — who heads Google’s spam fighting team — told me, "I think people will get the idea of the types of sites we’re talking about."

Well, there are two types of sites "people" have been talking about in a way that Google has noticed: "scraper" sites and "content farms." It mentioned both of them in a January 21 blog post:

We’re evaluating multiple changes that should help drive spam levels even lower, including one change that primarily affects sites that copy others’ content and sites with low levels of original content. We’ll continue to explore ways to reduce spam, including new ways for users to give more explicit feedback about spammy and low-quality sites.
As “pure webspam” has decreased over time, attention has shifted instead to “content farms,” which are sites with shallow or low-quality content.

Those are the key sections, which I’ll explore next.

The “Scraper Update”
About a week after Google’s post, Cutts confirmed that an algorithm change targeting “scraper” sites had gone live:

This was a pretty targeted launch: slightly over 2% of queries change in some way, but less than half a percent of search results change enough that someone might really notice. The net effect is that searchers are more likely to see the sites that wrote the original content rather than a site that scraped or copied the original site’s content.
“Scraper” sites are widely defined as sites that don’t produce original content but instead pull content in from other sources. Some do this through legitimate means, such as using RSS files with permission. Others may aggregate small amounts of content under fair use guidelines. Some simply “scrape” or copy content from other sites using automated means — hence the “scraper” nickname.

In short, Google said in January that it was going after sites with low levels of original content, and it delivered a week later.

By the way, sometimes Google names big algorithm changes, such as in the case of the Vince update. Often, they get named by WebmasterWorld, where a community of marketers watches such changes closely, as happened with last year’s Mayday Update.

In the case of the scraper update, no one gave it any type of name that stuck. So, I’m naming it myself the “Scraper Update,” to help distinguish it from the “Farmer Update” that Google announced today.

But “Farmer Update” Really Does Target Content Farms
“Farmer Update?” Again, that’s a name I’m giving this change, so there’s a shorthand way to talk about it. Google declined to give it a public name, nor do I see one in the WebmasterWorld thread where members started noticing the algorithm change as it rolled out yesterday, before Google’s official announcement.

Postscript: Internally, Google told me this was called the “Panda” update, but they didn’t want that on-the-record when I wrote this original story. About a week later, they revealed the internal name in a Wired interview. “Farmer” is used through the rest of this story, though the headline has been changed to “Panda” to help reduce future confusion.
How can I say the Farmer Update targets content farms when Google specifically declined to confirm that? I’m reading between the lines. Google previously had said it was going after them.

Since Google originally named content farms as something it would target, you’ve had some of the companies that get labeled with that term push back that they are no such thing. Most notable has been Demand Media CEO Richard Rosenblatt, who previously told AllThingsD about Google’s planned algorithm changes to target content farms:

It’s not directed at us in any way.
I understand how that could confuse some people, because of that stupid “content farm” label, which we got tagged with. I don’t know who ever invented it, and who tagged us with it, but that’s not us…We keep getting tagged with “content farm”. It’s just insulting to our writers. We don’t want our writers to feel like they’re part of a “content farm.”
I guess it all comes down to what your definition of a “content farm” is. From Google’s earlier blog post, content farms are places with “shallow or low quality content.”

In that regard, Rosenblatt is right that Demand Media properties like eHow are not necessarily content farms, because they do have some deep and high quality content. However, they clearly also have some shallow and low quality content.

That content is what the algorithm change is going after. Google wouldn’t confirm it was targeting content farms, but Cutts did say again it was going after shallow and low quality content. And since content farms do produce plenty of that — along with good quality content — they’re being targeted here. If they have lots of good content, and that good content is responsible for the majority of their traffic and revenues, they’ll be fine. If not, they should be worried.

More About Who’s Impacted
As I wrote earlier, Google says it has been working on these changes since last January. I can personally confirm that several of Google’s search engineers were worrying about what to do about content farms back then, because I was asked about this issue and thoughts on how to tackle it, when I spoke to the company’s search quality team in January 2010. And no, I’m not suggesting I had any great advice to offer — only that people at Google were concerned about it over a year ago.

Since then, external pressure has accelerated. For instance, start-up search engine Blekko blocked sites that were most reported by its users to be spam, which included many sites that fall under the content farm heading. It gained a lot of attention for the move, even if the change didn’t necessarily improve Blekko’s results.

In my view, that helped prompt Google to finally push out a way for Google users to easily block sites they dislike from showing in Google’s results, via a Chrome browser extension for reporting spam.

Cutts, in my interview with him today, made a point to say that none of the data from that tool was used to make changes that are part of the Farmer Update. However, he went on to say that of the top 50 sites that were most reported as spam by users of the tool, 84% of them were impacted by the new ranking changes. He would not confirm or deny if Demand’s eHow site was part of that list.

“These are sites that people want to go down, and they match our intuition,” Cutts said.

In other words, Google crafted a ranking algorithm to tackle the “content farm problem” independently of the new tool, it says — and it feels like the tool is confirming that it’s getting the changes right.

The Content Farm Problem

By the way, my own working definition of a content farm is a site that:

1. Looks to see what the popular searches are in a particular category (news, help topics)
2. Generates content specifically tailored to those searches
3. Usually spends very little time and money, perhaps as little as possible, to generate that content
The problem I think content farms are currently facing is with that last part — not putting in the effort to generate outstanding content.

For example, last night I did a talk at the University Of Utah about search trends and touched on content farm issues. A page from eHow ranked in Google’s top results for a search on “how to get pregnant fast,” a popular search topic. The advice:


The class laughed at the “Enjoyable Sex Is Key” advice as the first tip for getting pregnant fast. Actually, the advice that you shouldn’t get stressed makes sense. But this page is hardly great content on the topic. Instead, it seems to fit the “shallow” category that Google’s algorithm change is targeting. And the page, which was there last night when I was talking to the class, is now gone.

Perhaps the new “curation layer” that Demand talked about in its earnings call this week will help in cases like these. Demand also reiterated in that call that it has quality content.

Will the changes really improve Google’s results? As I mentioned, Blekko now automatically blocks many content farms, a move that I’ve seen hailed by some. What I haven’t seen is any in-depth look at whether what remains is that much better. When I do spot checks, it’s easy to find plenty of other low quality or completely irrelevant content showing up.

Cutts tells me Google feels the change it is making does improve results according to its own internal testing methods. We’ll see if it plays out that way in the real world.

Why Google Panda Is More A Ranking Factor Than Algorithm Update

With Google Panda Update 2.2 upon us, it’s worth revisiting what exactly Panda is and isn’t. Panda is a new ranking factor. Panda is not an entirely new overall ranking algorithm that’s employed by Google. The difference is important for anyone hit by Panda and hoping to recover from it.

Google’s Ranking Algorithm & Updates
Let’s start with search engine optimization 101. After search engines collect pages from across the web, they need to sort through them in response to the searches people perform. Which are the best? To decide this, they employ a ranking algorithm. It’s like a recipe for cooking up the best results.

Like any recipe, the ranking algorithm contains many ingredients. Search engines look at words that appear on pages, how people are linking to pages, try to calculate the reputation of websites and more. Our Periodic Table Of SEO Ranking Factors explains more about this.

Google is constantly tweaking its ranking algorithm, making little changes that might not be noticed by many people. If the algorithm were a real recipe, this might be like adding in a pinch more salt, a bit more sugar or a teaspoon of some new flavoring. The algorithm is mostly the same, despite the little changes.

From time-to-time, Google does a massive overhaul of its ranking algorithm. These have been known as "updates" over the years. "Florida" was a famous one from 2003; the Vince Update hit in 2009; the Mayday Update happened last year.

Index & Algorithm Updates
Confusingly, the term “updates” also gets used for things that are not actual algorithm updates. Here’s some vintage Matt Cutts on this topic. For example, years ago Google used to do an “index update” every month or so, when it would suddenly dump millions of new pages it had found into its existing collection.

This influx of new content caused ranking changes that could take days to settle down, hence the nickname of the "Google Dance." But the changes were caused by the algorithm sorting through all the new content, not because the algorithm itself had changed.

Of course, as said, sometimes the core ranking algorithm itself is massively altered, almost like tossing out an old recipe and starting from scratch with a new one. These "algorithm updates" can produce massive ranking changes. But Panda, despite the big shifts it has caused, is not an algorithm update.

Instead, Panda — like PageRank — is a value that feeds into the overall Google algorithm. If it helps, think of it as if every site is given a “PandaRank” score. Sites that score low on Panda come through OK; those that score high get hammered by the beast.

Calculating Ranking Factors
So where are we now? Google has a ranking algorithm, a recipe that assesses many factors to decide how pages should rank. Google can — and does — change some parts of this ranking algorithm and can see instant (though likely minor) effects by doing so. This is because it already has the values for some factors calculated and stored.

For example, let’s say Google decides to reward pages that have all the words someone has searched for appearing in close proximity to each other. It decides to give them a slightly higher boost than in the past. It can implement this algorithm tweak and see changes happen nearly instantly.

This is because Google has already gathered all the values relating to this particular factor. It has already stored the pages and made note of where each word sits in proximity to other words. Google can turn the metaphorical proximity ranking factor dial up from, say, 5 to 6 effortlessly, because those factors have already been calculated as part of an ongoing process.
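
To make the recipe-and-dial metaphor concrete, here is a purely illustrative Python toy, not Google’s actual algorithm; the factor names, values, and weights are all invented. Because the factor values are assumed to be precomputed and stored, turning the "proximity dial" from 5 to 6 re-scores pages instantly, with no recrawling or recalculation.

```python
# Toy ranking model: all names, factors, and weights are invented for illustration.
# Factor values are precomputed and stored, so a weight tweak takes effect instantly.
stored_factors = {
    "pageA": {"proximity": 0.9, "links": 0.4, "words_on_page": 0.7},
    "pageB": {"proximity": 0.2, "links": 0.8, "words_on_page": 0.6},
}

def score(page, weights):
    """Combine the precomputed factor values using the current recipe of weights."""
    return sum(weights[factor] * value for factor, value in stored_factors[page].items())

old_weights = {"proximity": 5, "links": 10, "words_on_page": 3}
new_weights = {"proximity": 6, "links": 10, "words_on_page": 3}  # the dial turned up

for page in stored_factors:
    print(page, score(page, old_weights), score(page, new_weights))
```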

Automatic Versus Manual Calculations
Other factors require deeper calculations that aren’t done on an ongoing basis, what Google calls “manual” updates. This doesn’t mean that a human being at Google is somehow manually setting the value of these factors. It means that someone decides it’s time to run a specific computer program to update these factors, rather than it just happening all the time.

For example, a few years ago Google rolled out a "Google Bomb" fix. But then, new Google Bombs kept happening! What was up with that? Google explained that there was a special Google Bomb filter that would periodically be run, since it wasn’t needed all the time. When the filter ran, it would detect new Google Bombs and defuse those.

In recipe terms, it would be as if you were using a particular brand of chocolate chips in your cookies but then switched to a different brand. You’re still "inputting" chocolate chips, but these new chips make the cookies taste even better (or so you hope).

NOTE: In an earlier edition of this story, I’d talked about PageRank values being manually updated from time-to-time. Google’s actually said they are constantly being updated. Sorry about any confusion there.

The Panda Ranking Factor
Enter Panda. Rather than being a change to the overall ranking algorithm, Panda is more a new ranking factor that has been added into the algorithm (indeed, on our SEO Periodic Table, this would be element Vt, for Violation: Thin Content).

Panda is a filter that Google has designed to spot what it believes are low-quality pages. Have too many low-quality pages, and Panda effectively flags your entire site. Being Pandified, Pandification — whatever clever name you want to call it — doesn’t mean that your entire site is out of Google. But it does mean that pages within your site carry a penalty designed to help ensure only the better ones make it into Google’s top results.
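
As a purely hypothetical sketch of what "a value that feeds into the overall algorithm" could look like (invented for illustration; this is in no way Google’s real implementation), imagine a site-level quality score that is recomputed only when a periodic batch filter runs, and that demotes every page on a flagged site:

```python
# Invented toy model of a site-level quality factor; not Google's real system.
# The site score is recomputed only when the batch "filter" is run, which is why
# improvements made between runs would not show up immediately.
site_quality = {}  # 0.0 (fine) .. 1.0 (flagged as mostly low quality)

def run_quality_filter(sites):
    """Periodic batch job: estimate how much of each site looks low quality."""
    for site, pages in sites.items():
        low = sum(1 for page in pages if page["thin_content"])
        site_quality[site] = low / len(pages)

def page_score(site, base_score, demotion_strength=0.5):
    """Page-level ranking score, demoted by the stored site-level factor."""
    return base_score * (1 - demotion_strength * site_quality.get(site, 0.0))

sites = {"example-farm.com": [{"thin_content": True}, {"thin_content": True},
                              {"thin_content": False}]}
run_quality_filter(sites)
print(page_score("example-farm.com", base_score=10.0))
```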

At our SMX Advanced conference earlier this month, the head of Google’s spam fighting team, Matt Cutts, explained that the Panda filter isn’t running all the time. Right now, it takes too much computing power to run this particular analysis of pages continuously.

Instead, Google runs the filter periodically to calculate the values it needs. Each new run so far has also coincided with changes to the filter, some big, some small, that Google hopes will improve how well it catches poor-quality content. So far, the Panda schedule has been like this:

1. Panda Update 1.0: Feb. 24, 2011
2. Panda Update 2.0: April 11, 2011 (about 7 weeks later)
3. Panda Update 2.1: May 10, 2011 (about 4 weeks later)
4. Panda Update 2.2: June 16, 2011 (about 5 weeks later)

Recovering From Panda
For anyone who was hit by Panda, it’s important to understand that the changes you’ve made won’t have any immediate impact.

For instance, if you started making improvements to your site the day after Panda 1.0 happened, none of those would have registered for getting you back into Google’s good graces until the next time Panda scores were assessed — which wasn’t until around April 11.

With the latest Panda round now live, Google says it’s possible some sites that were hit by past rounds might see improvements, if they themselves have improved.

The latest round also means that some sites previously not hit might now be impacted. If your site was among these, you’ve probably got a 4-6 week wait until any improvements you make might be assessed in the next round.

If you made changes to your site since the last Panda update, and you didn’t see improvements, that doesn’t necessarily mean you’ve still done something wrong. Pure speculation here, but part of the Panda filter might be watching to see if a site’s content quality looks to have improved over time. After enough time, the Panda penalty might be lifted.

Takeaways
In conclusion, some key points to remember:

Google makes small algorithm changes all the time, which can cause sites to fall (and rise) in rankings independently of Panda.

Google may update factors that feed into the overall algorithm, such as PageRank scores, on an irregular basis. Those updates can impact rankings independently of Panda.

So far, Google has confirmed when major Panda factor updates have been released. If you saw a traffic drop during one of these times, there’s a good chance you have a Panda-related problem.

Looking at rankings doesn’t paint an accurate picture of how well your site is performing on Google. Look at the overall traffic that Google has sent you. Losing what you believe to be a key ranking might not mean you’ve lost a huge amount of traffic. Indeed, you might discover that in general, you’re as good as ever with Google.

Wednesday, January 4, 2012

Latent Semantic Indexing

"LSI (Latent Semantic Indexing) is an ethnic way to get higher search engine placement with the use of synonyms rather than keyword density."

The process of retrieving relevant words or information from the content of your website is known as Latent Semantic Indexing (LSI). It has been a notable topic in information retrieval (IR) systems, and top search engines like Google, Yahoo, and Bing work with latent-semantic-indexing-style systems.

LSI is a different way of optimizing website content for search engines: it relies on varied, related keywords rather than exact-match keywords. The emphasis is on using the main keywords only a few times in a piece, while related words and key phrases can be used freely throughout the content. It is therefore very important to write relevant information using relevant words. Put another way, LSI is an approach to creating contextual web content that search engines can index and that is optimized to earn strong positions in search results. It differs from earlier search engine optimization techniques, which focused on exact keywords or key phrases.

For writers, Latent Semantic Indexing is a concept rather than a technique, and you need to keep it in mind while creating web content. You cannot fool search engines by inserting repetitive keywords without any other contextual content. The best SEO copywriting, or LSI copywriting, is to write naturally about your subject rather than stringing keywords together. The concept focuses on using a series of relevant words, all related to the primary keywords or key phrases the webpage is being optimized for, to create SEO-friendly contextual content. It strictly avoids repeating the same keywords a set number of times across a piece of web content; related words should be used instead of repetition. In short, from the writer's perspective LSI is not a technique; it is common sense and natural writing practice for the web.
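
For the curious, the name does come from a real information-retrieval technique: decomposing a term-document matrix with a truncated SVD so that documents using related vocabulary end up near each other in a latent "topic" space. Here is a minimal sketch using scikit-learn; it illustrates the underlying math only, not how any search engine actually ranks pages, and the sample documents are made up.

```python
# Minimal illustration of latent semantic indexing (LSI/LSA) itself:
# a TF-IDF term-document matrix reduced with a truncated SVD.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "how to bake chocolate chip cookies",
    "a simple biscuit recipe with chocolate",
    "improve search engine rankings with links",
]

tfidf = TfidfVectorizer().fit_transform(docs)             # term-document matrix
lsi = TruncatedSVD(n_components=2).fit_transform(tfidf)   # latent "topic" space

# Documents that use related vocabulary land near each other in the latent space,
# even with few exact keyword matches.
print(cosine_similarity(lsi))
```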

Thanks to the LSI SEO concept, optimizers and SEO copywriters are finding greater success in the field of internet marketing.

We at e Trade Services offer LSI SEO services. We have expert professional content writers and search engine optimizers who understand the concept of Latent Semantic Indexing well.

Friday, December 30, 2011

Google Panda Update: Say Goodbye to Low-Quality Link Building

A while back, I wrote about how to get the best high volume links. Fast forward eight months and Google has made two major changes to its algorithm -- first to target spammy/scraper sites, followed by the larger Panda update that targeted "low quality" sites. Plus, Google penalized JCPenney, Forbes, and Overstock.com for "shady" linking practices.

What's it all mean for link builders? Well, it's time we say goodbye to low quality link building altogether.

'But The Competitors Are Doing It' Isn't an Excuse

This may be tough for some link builders to digest, especially if you're coming from a research standpoint and you see that competitors for a particular keyword are dominating because of their thousands upon thousands of pure spam links.

But here are two things you must consider about finding low quality, high volume links in your analysis:
1. Maybe it isn't the links that got the competitor where they are today. Maybe they are a big enough brand with a good enough reputation to be where they are for that particular keyword.
2. If the above doesn't apply, then maybe it's just a matter of time before Google cracks down even further, giving no weight to those spammy backlinks.

Because, let's face it. You don't want to be the SEO company behind the next Overstock or JCPenney link building gone wrong story!

How to Determine a Valuable Backlink Opportunity

How can you determine whether a site you're trying to gain a link from is valuable? Here are some warning signs that Google may already deem a site low quality, or eventually will.

>> Lots of ads. If the site is covered with five blocks of AdSense, Kontera text links, or other advertising chunks, you might want to steer away from them.

>> Lack of quality content. If you can get your article approved immediately, chances are this isn't the right article network for your needs. If the article network is approving spun or poorly written content, it will be hard for the algorithm to see your "diamond in the rough." Of course, when a site like Suite101.com, which has one hell of an editorial process, gets dinged, then extreme moderation may not necessarily be a sign of a safe site either (in their case, ads were the more likely issue).

>> Lots of content, low traffic. A blog with a Google PageRank of 6 probably looks like a great place to spam a comment. But if that blog doesn't have good authority in terms of traffic and social sharing, then it may be put on the list of sites to be de-valued in the future. PageRank didn't save some of the sites in the Panda update, considering there are several sites with PageRank 7 and above (including a PR 9).

>> Lack of moderation. Kind of goes with the above, except in this case I mean blog comments and directories. If you see a ton of spammy links on a page, you don't want yours to go next to it. Unless you consider it a spammy link, and then more power to you to join the rest of them.

What Should You Be Doing

Where should you focus your energy? Content, of course!

Nine in 10 organizations use blogs, whitepapers, webinars, infographics, and other high-quality content for link building and to attract natural, organic links. Not only can you use your content to build links, but you can also use it to build leads by proving the business knows its stuff when it comes to its industry.

Have You Changed Your Link Building Strategy?

With the recent news, penalties, and algorithm changes, have you begun to change your link building strategies? Please share your thoughts in the comments!

Thursday, December 29, 2011

Web Analytics Year in Review 2011

Around the same time last year, we discussed how businesses were finally investing heavily in the tools, people, and processes required when operating data-driven organizations.

This year, an eConsultancy report estimates the UK web analytics technology and services sector alone to be worth more than £100 million annually. If we assume this number can be applied relative to GDP, that would put the web analytics technology and services sector well above $4 billion globally.
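
As a rough back-of-the-envelope check (the GDP share and exchange rate below are my own assumptions, not figures from the eConsultancy report):

```python
# Back-of-envelope scaling of the UK figure to a global estimate.
uk_sector_gbp = 100e6           # £100M UK web analytics sector (eConsultancy estimate)
uk_share_of_world_gdp = 0.035   # roughly 3.5% in 2011 (assumed)
gbp_to_usd = 1.6                # approximate 2011 exchange rate (assumed)

global_estimate_usd = uk_sector_gbp / uk_share_of_world_gdp * gbp_to_usd
print(round(global_estimate_usd / 1e9, 1), "billion USD")  # ~4.6, i.e. "well above $4 billion"
```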

But as with anything web analytics related, sometimes concentrating on the numbers is not as important as the trend! Total spend on internal staff, third-party agencies, and vendor revenues appears to have grown by 12 percent year over year, which is certainly in the realm of "significant".

These were the top stories and trends of 2011.

Online & Offline Data Integration

What good is online intelligence without offline context? The integration of online and offline data was a focus for many organizations in 2011 because without this connection, it’s hard to understand the online contribution of marketing, channel of preference for task-level customer and prospect interaction, and customer satisfaction across channels. Without making this connection, it is nearly impossible to optimize online experience for lifetime value.

Social Media Analytics

Social media analytics diversified, with a growing emphasis on business requirements. Many vendors and agencies started broadening their service portfolios to cater to varied business and social media goals in 2011.

The industry gained a little clarity this year when several vendors started clearly categorizing their social media analytics into several use cases such as:

>> Monitoring and trend analysis.
>> Sentiment analysis and reputation management.
>> Workflow management.
>> Integrated social insights.

Although this sub-sector of analytics is far from mature, several large-scale companies are taking major steps to bridge the gap between social media analytics and cross-channel product offerings. Look for significant moves in this area for 2012.

Omniture SiteCatalyst Launches

Adobe announced the launch of Omniture SiteCatalyst 15 at the Omniture Summit in March this year. For those of us fortunate enough to be in attendance, it felt as if we were strapped into a fighter jet and just engaged afterburners. Adobe has done a great job integrating Omniture into their product portfolio, and the wow-factor for their presentation was nothing short of awe-inspiring.

I’ve always had a healthy love-hate relationship with Omniture, so luckily for them the hype associated with V15 was warranted! Some of my favorite features include real-time segmentation, a new bounce rate metric, ad-hoc unique visitor counts, and a new processing rules feature that makes server-side implementation tweaks very easy.

Salesforce.com Buys Radian6

Salesforce.com bought Radian6 for $326 million and brought cloud computing to a whole new level. What I like most about this deal is how naturally this acquisition can be folded into Salesforce’s CRM product.

‘Super Cookies’

Unfortunately it’s not all good news this year, as several companies (most notably Kissmetrics) were the recipients of some serious bad press and legal action for use of so-called “Super cookies” in July. These Flash-based cookies were blamed for a number of privacy concerns including cross-domain and cross-client visitor identification and re-spawning traditional cookies after being cleared from user browsers.

Mobile Analytics

This year marked the dawn of mobile analytics, especially after Apple rewrote their third-party tracking policies towards the end of 2010. As the mobile market continues to mature with increased pressure from the almost limitless supply of new Android handsets and operating systems, look for mobile analytics to take a larger share of attention in 2012.

Google Analytics Real-Time

Google Analytics Real-time debuted in the fall of this year, enabling millions of site owners across the globe to watch user interaction as it happens, which is an exciting prospect for many. Although this feature set has been around for a while from vendors such as Woopra, it’s remarkable that Google would offer such a robust feature at no cost.

Google Encrypts Search Data

Almost immediately after any positive sentiment from the introduction of real-time analytics had tapered off, Google must have decided to test the waters with a carefully measured negative announcement: it would be removing search query parameters for users of its secure (SSL) search results. The news didn’t go over well with the online marketing community, and to this day the analytics community is still relatively sore on the subject, so don’t bring it up with your web analyst at the holiday party.
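
In practical terms, analytics tools have relied on pulling the search term out of the referring URL's q parameter; with SSL search that keyword no longer arrives, and Google Analytics lumps such visits under "(not provided)". A minimal sketch of that parsing, with made-up example URLs:

```python
# Illustration of what analytics tools lose: the q= keyword in the referrer.
from urllib.parse import urlparse, parse_qs

def search_keyword(referrer):
    """Return the search keyword from a referring URL, if it is present."""
    query = parse_qs(urlparse(referrer).query)
    return query.get("q", ["(not provided)"])[0]

print(search_keyword("http://www.google.com/search?q=web+analytics"))  # web analytics
print(search_keyword("https://www.google.com/"))                       # (not provided)
```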

Google Chrome Passes Mozilla Firefox

More good news for Google surfaced in November when Google Chrome surpassed Mozilla Firefox in global browser share for the first time in history. Although it is too soon to tell what the effect will be on the analytics industry, one thing is certain: ensure your quality assurance and browser compatibility testing covers all three major browsers.

Here’s to safe and happy holidays and a prosperous New Year!

Friday, December 23, 2011

How Advanced Marketers Will Use Facebook in 2012

As digital marketers, we’re frequently reminded magic formulas don’t really exist. Still, our experimentation and experiences often lead to insights about “what’s next.” Hopefully, the following insights and sample tools mentioned in this article will inspire your consideration (and actions) for 2012.

What Happened in 2011

For most brands, the predominant focus of Facebook marketing in 2011 was growing the fan base. We saw a variety of custom Facebook applications (tabs) paired with Facebook ad buys, where requiring a Like (becoming a fan of the page) was the first or even the final call to action.

As a result, some of the most common questions emerging were:
>> What’s the value of a Facebook fan?
>> How many Facebook fans should we have?
>> Now that we have these fans, what should we do with them?
>> What can we be doing with Facebook outside of Facebook?

And honestly, many have even asked, “why are we doing this again?”

It’s The Data, Stupid

If you’re saying, “oh no, not another discussion on analytics or the latest changes in Facebook Insights,” fear not. This discussion goes beyond tracking simple key performance indicators (KPIs) within some marketing dashboard that spits out monthly reporting with +/- percentages.

On the contrary, it goes straight to the core of how companies can use a new breed of tools leveraging Facebook data to dramatically improve advertising results, content creation and overall business strategies. For the sake of brevity, we’ll take a quick look at two tools in particular: CalmSea and InfiniGraph.

CalmSea

CalmSea is a technology platform that enables you to create a conversion-based offer that can be accessed via a website, email, tweet, mobile device or Facebook page. As an example, let’s consider a coupon.

Normally, the basic data you would expect to collect with an online coupon might consist of clicks, shares and redemptions. Of course, you may also collect some demographics – or even additional data, depending on form-related entries required of the user in order to get the coupon.

The trick with CalmSea lies within an extra click that prompts your Facebook authorization in exchange for access to the coupon (or other offer). This authorization includes access to 3-4 of your Facebook permissions, which provides the CalmSea platform with multiple data points specific to your social graph (likes, interests, demographics, friends, etc.).

All of this activity can take place on any web page, including your ability to share the coupon with others on Facebook without actually ever going to your Facebook page.

When I spoke to Vivek Subramanian, VP of Products for CalmSea, he said they are seeing upwards of a 70 percent acceptance rate on the permissions authorization for branded apps (which could include coupons, sweepstakes, private sales, group buys and more).

The Power of The Data

CalmSea takes the Facebook user interactions and news feeds around the given offer – then combines that data with purchase/conversion analytics (could be Google Analytics) to aggregate and display insights on segments of users/customers with the highest levels of:

>> Engagement
>> Profitability
>> Influence

This kind of data goes beyond Facebook Insights, in that it enables you to build predictive models based on distinct attributes that best describe current and potential customers with respect to the three items listed above.



In the figure above, you can get a slight feel for CalmSea’s dashboard, which demonstrates, among other items, the ability to view social insights compared to purchase data insights on users who have authorized the offer.

Depending on your role in the company (media buyer, content creator, channel partner/affiliate manager, etc.) this kind of data ideally improves how and where you spend your time and money.

The initial offer you develop with a platform like CalmSea will likely have a consistent conversion rate with similar offers you may have conducted in the past. It’s the offers that follow, leveraging the data collected from your first use of the platform, that stand to produce significantly improved results.

InfiniGraph

The InfiniGraph platform aggregates Facebook and Twitter data for the purpose of identifying relevant (real-time) affinities, content and interests that are trending around a particular brand, product or industry. There are two key considerations with respect to how this platform’s output produces actionable value:

Improved performance on your Facebook ads: Gives you insights into new interests/keywords you should be targeting as part of your selection process within Facebook’s ad platform.

Insights to assist with content creation and curation: Gives you a clear picture and delivery mechanism for content that is trending via a content “Trend Score” that algorithmically combines likes, comments, clicks, retweets, and shares (a toy illustration follows).
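
InfiniGraph has not published its formula, so purely as a hypothetical illustration, a composite "trend score" of this kind might look like a weighted sum of the signals listed above (the weights here are invented):

```python
# Invented weights; purely to illustrate the idea of a composite engagement score.
WEIGHTS = {"likes": 1.0, "comments": 2.0, "clicks": 1.5, "retweets": 2.5, "shares": 3.0}

def trend_score(post):
    """Weighted sum of a post's engagement signals."""
    return sum(WEIGHTS[signal] * post.get(signal, 0) for signal in WEIGHTS)

post = {"likes": 120, "comments": 30, "clicks": 45, "retweets": 10, "shares": 25}
print(trend_score(post))  # higher score = content trending harder
```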

InfiniGraph’s approach to identifying content that’s trending on Facebook, in particular, provides a level of opportunity that is certainly missed by many brands wishing to dive deeper into content strategy (check out the Digital Path to Social Media Success to view the four kinds of content you could be addressing).

To describe how this works, imagine a series of Facebook status updates that are posted about subject matter relevant to your fans (on your Facebook page or another Facebook page your fans follow).



In the sample from InfiniGraph above, you can see the dates these status updates were posted, in addition to the enormous amounts of engagement they received. Here’s the problem: Think of how many fans of this page would also be interested in this content, but simply didn’t see it. Now think of how quickly those status updates will slide down the page and disappear.

As Chase McMichael, President of InfiniGraph, told me, "Humans can’t keep up with trending content, nor can they see how content trends across multiple Facebook pages containing fans with similar interest."

McMichael alludes to "crowdsourcing" of the human voice around collective interests and actions. Not only can this aid in the repurposing of content otherwise lost, but as McMichael so eloquently puts it: "you can know where to double-down from a media buying perspective. Who needs comScore when you have a resource that is guiding you where to advertise based on what a large audience is in essence telling you?"

Wrap-up

Although the summaries on these platforms don’t do them justice, my hope is you’ll be inspired to dig deeper regarding the possibilities they offer. It will be interesting to see how Facebook will continue enabling access to data, but I think it’s a safe prediction that advanced marketers will leverage it to the hilt.

On a final note, I’ll bid you a farewell to 2011 with my favorite quote of the year:

"Data will become the new soil in which our ideas will grow, and data whisperers will become the new messiahs." – Jonathan Mildenhall, VP of Global Advertising Strategy at Coca-Cola

Thursday, December 8, 2011

New Tagging Suggests Google Sees Translated Content As Duplicates

Last year Google launched meta tags for sites where a multilingual template (i.e., side navigation, footer) is machine-translated into various languages but the main content "remains unchanged, creating largely duplicate pages." This week they have gone a step further and now include the ability to differentiate between regions that speak basically the same language with slight differences.

Like the canonical tag, implementation falls on website owners, in order to get "support for multilingual content with improved handling for these two scenarios:
1. Multiregional websites using substantially the same content. Example: English webpages for Australia, Canada and USA, differing only in price.
2. Multiregional websites using fully translated content, or substantially different monolingual content targeting different regions. Example: a product webpage in German, English and French."

This tagging is interesting and suggests Google knows when the content on a site is duplicate despite it being in a different language. Does Google's data storage have the ability to translate, or just to recognize words that are used in the same language but differ regionally? If I use "biscuit" on my UK or Australian sites in place of "cookies", does Google know they are the same word?

"If you specify a regional subtag, we'll assume that you want to target that region," Google tells us.

Is duplicate content now being measured for similar terms? Or are the tags a way to have website owners limit the pages Google indexes for regional areas? Do we add the tags, and Google then thins out the pages we have showing in the SERPs for different regions?

Google shared some example URLs:
>> http://www.example.com/ - contains the general homepage of a website, in Spanish
>> http://es-es.example.com/ - the version for users in Spain, in Spanish
>> http://es-mx.example.com/ - the version for users in Mexico, in Spanish
>> http://en.example.com/ - the generic English language version

On these pages, you can use this markup to specify language and region (optional):
>> <link rel="alternate" hreflang="es" href="http://www.example.com/" />
>> <link rel="alternate" hreflang="es-ES" href="http://es-es.example.com/" />
>> <link rel="alternate" hreflang="es-MX" href="http://es-mx.example.com/" />
>> <link rel="alternate" hreflang="en" href="http://en.example.com/" />

It seems like many wouldn't bother installing the tags unless Google were to start dropping pages, or unless the implementation helps improve regional rankings for pages where publishers have gone that extra step and customized their content for specific regions and subtle language differences.

The hreflang attribute has been around for quite some time. The W3C discussed it back in 2006 and covers it in its "Links in HTML documents" specification. Adding it to the head-tag information in this way seems to be a new twist. How Google uses the information for ranking will really determine whether people use it.

Tuesday, December 6, 2011

SEO is Both Science and Art

For many people who aren’t involved in search engine optimization (SEO) on a regular basis, it’s easy (or so they think). You simply create a website, write some content, and then get links from as many sources as you can.

Perhaps that works. Sometimes.

More often than not, the craft of SEO is truly a unique practice. It is often misunderstood and can be painfully difficult to staff for. Here’s why.

SEO is Science

By definition, “Science” is:

1. a branch of knowledge or study dealing with a body of facts or truths systematically arranged and showing the operation of general laws: the mathematical sciences.
2. systematic knowledge of the physical or material world gained through observation and experimentation.
3. any of the branches of natural or physical science.
4. systematized knowledge in general.
5. knowledge, as of facts or principles; knowledge gained by systematic study.

Anyone who has performed professional SEO services for any length of time will tell you that at any given time we have definitely practiced each of the above. In some cases, changes in our industry are so rapid that we crowdsource the science experiments among peers (via WebmasterWorld forums or Search Engine Watch forums).

Unfortunately, Google doesn’t provide step-by-step instruction for optimization of every single website. Every website is unique. Every optimization process/project is unique.

Every website represents new and interesting optimization challenges. All require at least some experimentation. Most SEOs follow strict methods of testing/monitoring/measuring so that we know what works and what doesn’t.

We have a few guidelines along the way:

1. Our “branch of knowledge” is well formed in what Google provides in their Webmaster Guidelines and SEO Starter Guide.
2. Our unique experience. Just like you might “learn” marketing by getting your bachelor’s degree in marketing, you really aren’t very good at it until you’ve worked in your field and gained real-world experience. There are so many things you can read in the blogosphere regarding SEO that are complete crap. But if you didn’t know any better, you’d buy into them because “it sounds reasonable, so it must be true!” So be careful about claiming something is 100 percent “true” unless you have enough “scientific” evidence to back up the claim. Otherwise, it’s called a “hypothesis”:
a. A supposition or proposed explanation made on the basis of limited evidence as a starting point for further investigation.
b. A proposition made as a basis for reasoning, without any assumption of its truth.

SEO is Also Art

By definition, art is:

the conscious use of skill and creative imagination especially in the production of aesthetic objects

I've worked with and befriended many incredibly bright SEOs in my years in this business. Those who manage to blend scientific skills with creative thinking about how to experiment with and improve programs are the gems.

Getting creative with SEO is thinking of how a marketing program can encompass social, graphic design, link building, content generation, and PR to drive toward a common goal.

Getting creative with SEO is also about reworking a website’s design/code so that usability and accessibility improve, while maintaining brand guidelines and keeping with “look and feel” requirements, yet improving SEO.

Every day, we must get creative in determining how to best target keywords by determining which method of content generation gives us the best chance at gaining a presence in the search engines and – most importantly – engaging our audience.

Should we write a blog post? Should this be best handled in a press release? How about a video? Infographic? New “corporate” page on the site? There are a multitude of ways that we might determine to target a keyword via content.

The Perfect SEO

Today’s SEO is so much more involved than SEO of years past. When I hear people saying that they’re trying to determine if they should hire an in-house SEO or an agency, I will give them the pros and cons of each (and there sincerely are pros and cons of each).

But one factor which I believe leans toward the strength of an agency is that there’s typically going to be a team of individuals, each with a unique skill set. And, these individuals can share examples of what works and what doesn’t, with each other (scientific experiments often occur), they can bounce creative thoughts off of one another and collectively provide more value than any one person might.

Our industry needs more of these highly skilled “freaks of nature” who blend both the scientific skill and artistic creativity of SEO.

Friday, November 11, 2011

Google May Penalize Your Site for Having Too Many Ads

Google is looking at penalizing ad heavy sites that make it difficult for people to find good content on web pages, Matt Cutts, head of Google's web spam team, said yesterday at PubCon during his keynote session.

"What are the things that really matter, how much content is above the fold," Cutts said. “If you have ads obscuring your content, you might want to think about it,” implying that if a user is having a hard time viewing content, the site may be flagged as spam.

Google has been updating its algorithms over the past couple months in their different Panda updates. After looking at the various sites Panda penalized during the initial rollout, one of the working theories became that Google was dropping the rankings of sites with too many ads "above the fold."

This is an odd stance, considering Google AdSense Help essentially tells website publishers to place ads above the fold by noting, "All other things being equal, ads located above the fold tend to perform better than those below the fold."

Cutts also encouraged all websites that have been marked as spam and feel they should not have been marked as spam to report their sites to Cutts and his team. Cutts stated that he has a team of web spam experts looking into problem sites and that the Google algorithm still misses a site or two in its changes.

SEO is Not Dead, Is Always Evolving


Leo Laporte took the stage Tuesday as the keynote speaker at PubCon. Laporte talked about video and how getting your audience involved with you is the next step to online media.

Later, Laporte said he believes SEO will be dead in the next six months. As you'd expect, the crowd responded negatively to this assertion, even causing many of them to walk out.

Yesterday, Cutts responded during his keynote talk. Cutts started by setting the record straight, letting everyone in the audience know that SEO will still be here for the next six months, let alone the next six years.

Cutts joked about a tweet describing him "spitting out his morning coffee" in reaction to Laporte's statement the morning before. He thought it was more of a joke and laughed about the whole thing.

SEO will always be evolving, Cutts told the audience. Search will always be getting better, getting more personalized for each one of us. Google will always be striving to help people to get the best results possible while getting fresh real time results.

Later he talked about how, even if Google and every other search engine were to die, Internet marketing and SEO would still be alive because of social. Looks like SEO is here to stay!