Monday, May 25, 2009

Extending Twitter


I've been thinking about Twitter even more since our class last week. When I get a few minutes this week, I'll have IT install TweetDeck on my laptop (I am not a "developer" so I am being saved from myself by the IT Gods).

My employer is actively using Twitter to address customer service issues, market our products, and improve our image (I can't get into details on some cool things we are doing). All the noise about Twitter got me thinking about the value of such a tool for corporate espionage.

I'm not advocating this. I'm wondering if Twitter will make it easier for the competition to sniff out new developments at other firms. Internally, what about people using Tweets as an electronic water cooler, chatting about work to whoever will listen?

It seems that efforts like this would be high cost, low yield. Who to watch? Which tweets matter? Are people really foolish enough to Tweet w/o thought after Ketchum VP James Andrews' mea culpa Tweet in January? I've been hearing more office gossip lately; I don't expect that this stuff will show up online, whether on Facebook, Twitter, or somewhere else.

In terms of corporate espionage, here's a recent story that is relatively low tech. What if someone hacks your email/Twitter account? The scenario is the same; the technology is just a little different.

Monday, May 18, 2009

Campaign Success Metrics

I was talking with a friend of mine last week. She told the tale of a project, just wrapped up, that allowed placement of a sponsored product on the company's search results page. During the project, there was a debate about measurement between two camps: old school web and what I'll call the pragmatists.

The old school web folks argued that knowing clicks & impressions was enough to demonstrate the success of a customer's ad purchase. After all, they argued, these are the metrics in the sales contract. The pragmatists argued that those measures are great, but have become nearly irrelevant without a direct link to conversion. They argued that the multivariate analysis needed to demonstrate an ad's success would be too time-intensive to be cost-effective. Moreover, customers wouldn't "buy" such a methodology.

I'm with the pragmatists. Knowing how many people saw and clicked on my ads isn't compelling on its own. What I really want to know is the whole funnel, down to conversion [read: purchase]. I want my ad dollars tied to as much real data as possible. And if a company, like my friend's, approached me with a pitch for sponsored results, I'd be interested. If they told me that they were not able to measure conversion for previous customers, or for my own campaign, I'd turn them down (unless, of course, they were Google).
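To make that concrete, here's the back-of-the-envelope version of what I mean by the whole funnel. This is just a quick Python sketch; the field names and numbers are made up for illustration, not data from my friend's project:

```python
# Rolling a campaign funnel up from impressions down to purchase.
# All numbers below are placeholders, not real campaign data.

funnel = {
    "impressions": 120_000,   # times the ad was shown
    "clicks": 1_800,          # times the ad was clicked
    "conversions": 45,        # purchases completed
}
spend = 2_500.00              # total ad spend, also a placeholder

ctr = funnel["clicks"] / funnel["impressions"]          # clicks per impression
conversion_rate = funnel["conversions"] / funnel["clicks"]  # purchases per click
cpa = spend / funnel["conversions"]                     # cost per acquisition

print(f"CTR: {ctr:.2%}")
print(f"Conversion rate: {conversion_rate:.2%}")
print(f"Cost per acquisition: ${cpa:.2f}")
```

Clicks and impressions only get you the top two rows; it's the last two numbers that tell me whether the ad dollars were worth it.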

I was thinking about this when I read this article in Advertising Age. There are times when I can't/don't expect to measure conversion. Some of my campaigns may be a "success" if we increased brand awareness or improved brand perception. How do you set your goals for campaigns? Does each one have a "softer", less measurable component reserved for brand awareness/perception? Is it reasonable to expect more?

Monday, May 11, 2009

Which 50%

My blog subject this week is my team’s work on the Google Online Marketing Challenge. We’re working hard to figure out which 50% of our paid search budget is meaningful, and which 50% is wasted. What I’m finding out through the project is that we’re learning what does *not* work more than what is working.

We know that our click through rates are low. Working for a web company, I know that these rates are low for most advertising, which softens some of the sting. We are seeing good impressions data, so we know we’re getting “looks”. The problem is that we were hoping for much more in the way of clicks. For the purpose of the assignment, clicks are our conversion measure. Given the constraints of time and web development money, this will work as a proxy for this project; if it were our business & these constraints were removed, we’d measure conversion differently.

Most of our keywords have a quality score of 7 or 8; we’ve wracked our brains to come up with additional variety. Is a 9 or a 10 attainable? As I alluded to above, if we could customize our landing pages and make a few other changes, we might earn a higher score. We’ve also pushed out as many different versions of ad copy as we can think of. We know which keywords get the most looks, and which ads have generated clicks. That leaves most of our initial list of keywords sitting “inactive” because they generated neither looks nor clicks. That is the easy part: these don’t work.
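For anyone curious, the triage we're doing by hand amounts to something like the sketch below. The keyword rows are invented placeholders, not our actual AdWords report:

```python
# Bucket keywords by whether they get looks (impressions) and clicks.
# The data here is made up for illustration.

keywords = [
    {"keyword": "example term a", "impressions": 5400, "clicks": 37},
    {"keyword": "example term b", "impressions": 900,  "clicks": 0},
    {"keyword": "example term c", "impressions": 0,    "clicks": 0},
]

def bucket(row):
    if row["clicks"] > 0:
        return "generating clicks"
    if row["impressions"] > 0:
        return "looks only"
    return "inactive"  # neither looks nor clicks: the easy cuts

for row in keywords:
    print(f'{row["keyword"]}: {bucket(row)}')
```

The "inactive" bucket is the part we already understand; the interesting work is figuring out why the "looks only" bucket isn't converting to clicks.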

What we don’t know is how much we’re missing: what our potential customers are searching for before they find us. This is what everyone struggles with, and it means *at least* that we don’t know our customers as well as we should. Sure, we’ve interviewed some of them, and we know which search terms they use when they are on our site. But we cannot say what else our potential customers are looking for. Yet.

We’re working to read the things our customers read online, to get an idea of what other content and/or search terms they might see. We hope that will provide us with some new keywords and/or ad text in the last week & a half. We might also determine that some of the sites we think prospects view are not showing Google AdWords. If that is the case, it’ll be great information to pass along to our business partner.

We don’t want the only thing we tell our business partner to be which 50% doesn’t work. The way things are going, though, that might be as far as we get. The saving grace is that the money here was Google’s, and they have plenty of that.