Category Archives: Web

A few changes around the old site

I decided it was time for a few changes (hopefully improvements) around here. I figured persistent navigation was a novel idea and might actually add some continuity to the rest of the site. I know there are some bugs; I forgot how much I loved tracking down weird CSS inconsistencies on seemingly identical pages and code. I’ve still got some pages to build out for the different sections too, so that will be next.

I also took the blog to 2 columns with a wider content column and bumped up the font size a notch. There was also a lot of cleanup work to do in the WordPress templates I had hacked together over the years. There’s still a bunch more I’ll get around to, but I think this was a good start. Let me know if you see anything weird as I continue to work out the kinks.

Santorum keeps on Spreading

Last weekend I attended The Stranger‘s Hump 2 screening (which was amusing, frightening, funny, disgusting and jaw-dropping all at the same time), and I chatted briefly with Dan Savage. He mentioned his little pet project, which I had worked on, and he said it was featured on the Daily Show not too long ago. A quick YouTube search, and there it is. Santorum segment starts at 1:30 into the video…

And thanks to a Google algorithm change in the past few months (I think we had previously been hand-edited down to 2nd in the results), we’ve regained the #1 spot for “santorum“. With that one indirect mention on the Daily Show in July, check out the traffic spike on the site.

In the YouTube results I also found that santorum was featured in a full two-minute spot on Google Current, which is part of the Current TV lineup. I’m glad to see a little Google-bombing has good staying power and keeps on spreading.

Facebook Frenzy

The latest furor over the Facebook redesign and new features has quickly grown to ridiculous proportions. To catch up on the latest happenings, here’s the Techcrunch summary on the new features. And the follow-up about the outrage, with some clear explanations of the ridiculousness. Another good summary of the whole ordeal is available here. The Facebook CEO explained it just right when he wrote,

“The privacy rules haven’t been changed. None of your information is visible to anyone who couldn’t see it before the changes. Nothing you do is being broadcast; rather, it is being shared with people who care about what you do–your friends.”

I think most of the backlash is a result of users just not understanding how or why the feature works the way it does. Friendster did this exact same thing via e-mail updates and “what’s new in your network” boxes on the site. There was little outcry there (probably because no one remembers/uses Friendster anymore), and Friendster even took it a step further and added the reverse-stalker feature a year ago. You can see a list of everyone who viewed your profile and when, whether they are in your immediate network or not. It was released, and turned on by default.

What really strikes me about the Facebook situation is the illusion of privacy that all these angry users are clinging to. You’re posting your semi-private information, to a semi-public site, where anyone in your semi-private network can view those details. Now your semi-private network can see those details and changes THAT YOU’RE PUBLISHING on the very same site, viewed in a slightly different way. Where did the confusion come from? How did users misunderstand the entire concept of the news feed? Did they just miss how easily-controllable all the details are? Are the ideas and technology (social networks, plus quick-reference news feed) just too new for them to wrap their heads around?

While browsing through all the anti-news feed groups that have sprung up in Facebook, I came across this gem of irony. A group was titled: Is it bad that I found the “against the news feed” group from the news feed?

Enough said.

Google Working on Flash Games?

Like some people with personal websites or weblogs, I have my resume sitting on my site, just hanging out. Every once in a long while I’ll get a recruiter who stumbles across it as they’re doing searches. I received this in an e-mail the other day:

I am a recruiter looking for some top talent to fill an exciting Software QA Tester position at Google.

There is a Google office that opened up not too long ago in Kirkland, Washington and apparently they’re looking to fill a number of QA positions (temporary assignments). I do have a bit of experience, but unfortunately I’m not too interested in moving back into QA, let alone on a temporary basis. The rest of the message was your typical technical job description and common sense requirements: “a quick learner, a great team player, and able to work independently…” But at the top of the requirements there was an intriguing line, surrounded by double asterisks:

** Some of the openings require extensive experience testing Flash applications and some game background. **

What is Google working on? Flash and games? Not a lot of Google’s products currently use Flash. There’s Analytics, which was almost entirely acquired rather than built in-house. Google Video uses Flash to embed its videos, and Google Finance does some nifty Flash stock charts. Is there much else?

And games? What’s the plan there? Sure, it falls somewhere on the list of possible directions Google could go. Yahoo has a huge user base in their online games, and Google may want a piece of the pie. It would of course be a huge area for targeted advertising through the AdSense behemoth. Puzzle games, maybe an umpteenth Bejeweled clone, Google-tris… a Flash-based Google MMORPG? Although I think they’d make a real killing if they went with online card games and poker, using real money powered by Google Checkout.

Anyway, if you’re interested in a temporary QA position in the Seattle area, the big “G” is up to something.

A Week with Bloglines

I’ve now been using Bloglines almost exclusively for over a week for my weblog checking and reading. For ages now I’ve been using this static bookmark page as my homepage, and I’ve still been hitting the various sites directly to check for updates and read new posts. My original plan for the page (as the name suggests) was to pull feeds for the sites and have it be a single-stop, quick-glance page where I could see what’s new. Continual problems with feeds and scripts gave me more errors than I wanted, and sent me off on troubleshooting goose chases. I eventually stripped the page down to a link page and quite enjoyed visiting each site on its own. The setup still wasn’t ideal, though, and I found myself missing occasional updates.
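That original plan, pulling each site’s feed onto one quick-glance page, could have been sketched with nothing but the standard library. A minimal sketch, assuming RSS 2.0 feeds; the sample feed and the function name are made up for illustration:

```python
# A minimal "quick-glance" sketch: parse an RSS 2.0 feed and list its
# latest item titles. In real use, each feed would be fetched from a URL.
import xml.etree.ElementTree as ET

def latest_titles(rss_xml, limit=3):
    """Return up to `limit` item titles from an RSS 2.0 feed string."""
    root = ET.fromstring(rss_xml)
    titles = [item.findtext("title", default="(untitled)")
              for item in root.iter("item")]
    return titles[:limit]

# Hypothetical feed standing in for a real site's RSS.
SAMPLE_FEED = """<rss version="2.0"><channel>
  <title>Example Weblog</title>
  <item><title>First post</title></item>
  <item><title>Second post</title></item>
</channel></rss>"""

for title in latest_titles(SAMPLE_FEED):
    print("-", title)
```

Of course, the fragile part was never the parsing; it was the fetching and the malformed feeds in the wild.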

After a number of recommendations, I decided to give Bloglines a try. At first I was half-assed about it and would only occasionally check things through Bloglines. To give it a real effort, I converted my default homepages over to the Bloglines page and decided to use it exclusively for a week. It’s been pretty nice. I like the simple interface: browse through your sites in the left pane, and read updates on the right. I never got used to viewing updates for a whole group at a time (multiple sites’ feeds listed all together), so I still viewed updates site by site. With everything in the same window, and the number of new updates listed next to each site link, I found it incredibly quick to browse and catch up on the latest happenings.

As I’ve written before, I think there’s something to be said for visiting the actual site, rather than viewing someone’s content through a third party window. I’ve been stubborn about it, and it was with some reluctance I finally tried Bloglines. During the week, I still found myself visiting some of the original sites just in case I missed something. It just didn’t feel right when I wasn’t reading it on the original site. Then there are some sites out there that don’t give you full access to feeds, so I still had to visit on my own. Or there are the sites that break out their link and post feeds separately, so I was checking two feeds in Bloglines for a single site.

I’m not sure whether I like the feed reader experience better than my old-fashioned habits, but after a week I did get used to things. It certainly is easier to add and organize sites in the Bloglines list, rather than a lousy hand-edited HTML file. I might continue this way for a while. Does anyone prefer a site other than Bloglines? The Safari RSS option was also mentioned, but I’m on a PC much of the day. Or maybe something like Sage for Firefox could do the trick. Firefox live bookmarks are neat, but don’t provide a good at-a-glance overview. What are your preferred methods for reading the web?

How Red is the Redfin Fin?

In my Techcrunch party write-up the other day, I pondered a bit about the profitability of the various startups around. I’ve chatted a bit more with some friends about Redfin in particular, and how well their model of selling houses online is going to fare. I ran across this blog post, actually written just before last week’s party, which dissects some of the numbers quoted in this Seattle PI article about Redfin’s sales to date. Whichever numbers are correct (40 homes at $18 million, or about $180k in commissions, vs. 13 homes at $7 million, about $70k), I think they’re fairly impressive for having their direct service running for just 5 months (and still at just 25 employees).

The PI article mentions that ZipRealty sold $900 million worth of homes in the first three months of the year (with a shocking 1,400 agents!). Apply Redfin’s “measly” 1% commission to that and we’re talking $9 million in income. Yeah, yeah, so what do all these numbers mean? I’m no economics genius, but it seems clear that the online home buying business scales nicely. Despite having to hire agents to man phones, process paperwork, manage offers, etc., the throughput of a polished web-based real estate system is always going to be faster (not to mention cheaper for the buyer) than going through it the old-fashioned way. Also, considering only a fraction (maybe half?) of Redfin’s employees are currently agents, they’re more efficient (by either sales figure) than ZipRealty, at least for the time being.

Redfin is in its infancy, and the 360Digest post mentions that a small $70k commission sum might not be worth an $8 million investment round. I would argue that the ZipRealty example demonstrates that the idea scales very nicely, and can easily make that $9 million back with the right throughput of sales. I think Redfin is in good shape, and I’m really rooting for them as they’ve taken on the San Francisco market. Expanding means hiring more of their pseudo-agents to handle the sales, but the more they can streamline their core application, the more sales they can push through, and so on…

Techcrunch Web 2.0 Party write-up

Last night, my friend Darren and I attended the TechCrunch Seattle Web 2.0 Party. The event was sponsored by local startups Redfin, Farecast, and TripHub. It was the expected schmooze-fest of young, unprofitable new businesses, established big guns chatting and hinting at their grand plans, and plenty of regular folk all wanting a piece of the pie. Here’s the summary of the few demos we saw and the various conversations we had…

Continue reading

How many monkeys does it take to write the web?

Continuing the discussion on social networks and user-contributed content: I started writing this as a comment on Alex’s post, but it got long enough that I decided to bring it over here as its own post…

In response to this article about the contributors to Wikipedia, Alex makes the point that Carr’s split of numbskulls vs. a few active contributors is too simplified, and that Wikipedia’s nature also favors a specialist/janitor split. This is not to suggest that Wikipedia is entirely specialists and janitors (the stories of jerks, spammers, and censors abound) but I see how a Wiki’s nature might attract more of that type of division.

Every community-organized/moderated site, or gasp “web 2.0″ app with a social network is going to have different types of folk in that 80-20 division. I ran into this mentioned in a couple other articles recently, and discovered it actually has a name: the Pareto principle. If I’d taken more economics classes in college, I might have known. A site like Flickr may have more of the social connectors in the 20% of its population, powering the majority of the groups, friends, and favorites. And a site like Digg might favor the dedicated blogger/web-surfers contributing the majority of popular links and stories. In economics it’s 20% of the population controlling 80% of the wealth. The same division was found (not surprisingly) with weblogs, where the top 10-20% of all weblogs (the notorious a-listers) were responsible for the majority of links (often back to themselves)1.
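The 80-20 shape is easy to see in a toy model. A purely illustrative sketch, assuming contributor activity follows a power law; the number of contributors and the exponent here are made up:

```python
# Toy Pareto-principle sketch: give each of 100 contributors a power-law
# "activity" weight, then see what share of total activity the most
# active 20% account for. The exponent 1.5 is arbitrary, for illustration.
contributors = 100
weights = [1 / (rank ** 1.5) for rank in range(1, contributors + 1)]
total = sum(weights)

# Weights are generated in descending order, so the first fifth of the
# list is the top 20% of contributors.
top_fifth = weights[:contributors // 5]
share = sum(top_fifth) / total
print(f"Top 20% of contributors produce {share:.0%} of the activity")
```

Depending on the exponent you pick, the top fifth ends up with anywhere from a comfortable majority to nearly everything, which is roughly the range these sites seem to land in.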

Every system is going to favor different types of splits, with a different subset of people. I really like that idea. Most of the time we think of these community-powered sites as massive networks of people working together, when they really aren’t. Flickr is just a couple thousand photo enthusiasts culling through all the junk… Wikipedia is a combination of a few specialists, janitors, and information hounds doing what they love… and the web is just a few obsessive web-surfers linking to everything. These sites and social networks aren’t powered by the masses; they’re powered by the dedicated niche users. And therein lies Econ. 101, or something: find a demand, fill the niche, and supply the masses2.

How’s that for super-generalized social and economic theory?

1 I need to find that article again.

2 And clean up their messes.

Happy 500 Posts!

This marks the 500th post on this weblog. It seems fitting that it’s exactly 6 years to the month from when I first started writing this crap on the web. This didn’t really start as a weblog as we now know it, but it still had the same idea. I’ve even still got an old version living here. I admit that 500 posts over 6 years isn’t the most impressive posting rate, at about 1 post every 4 days. But I guess it adds up.

Some of the highlights from the past few years…

It’s pretty fun browsing through my old posts, and having those things documented to varying degrees. They start out when I was still in school, and then through 3 jobs, various locations, quite a few trips and activities, and back again.

Here we are, and here’s to 500 more!

Pumping Iron for the Lord

A few days ago I checked my website stats and noticed that my top referrer was for a body-building site. I couldn’t figure out why in the world I was getting all of these hits. After digging into the lovely ABC Body-Building, I found that on the forum entrance page, one of the main forum posters was using the “Buff Jesus” image from my blog post way back here. That explained it. I poked around the forum a bit and it turns out this guy is basically Mr. Evangelical for the whole body-building forum, and since this image was his avatar/icon, it was showing up everywhere.

As general net-etiquette, it isn’t polite to link directly to images hosted on other people’s sites. At the very least it isn’t good design to rely on random third parties to “host” your graphics, and it also leeches someone else’s bandwidth. So I decided to have a little fun with the body-building preacher by changing the content of the image he was linking to…
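For what it’s worth, my swap was done the low-tech way: I just replaced the image file at the same URL. A more systematic variant, if you want your server to feed hotlinkers a substitute automatically, is a referrer check with Apache’s mod_rewrite. A rough sketch, assuming mod_rewrite is enabled; the domain and file names are placeholders:

```apache
# .htaccess sketch: image requests whose Referer is some other site get
# served a substitute image. example.com and surprise.jpg are placeholders.
RewriteEngine On
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?example\.com/ [NC]
# Don't rewrite the substitute image itself, or we'd loop.
RewriteCond %{REQUEST_URI} !surprise\.jpg$
RewriteRule \.(gif|jpe?g|png)$ /images/surprise.jpg [L]
```

The empty-referrer exception keeps direct visitors and privacy-conscious browsers from getting the joke image too.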

Replaced Jesus

I feel a little bad that my immediate association with body-building was steroids, since I couldn’t find any evidence for their use by people frequenting this site. But with all of the crazy photos they post of themselves and others (Ahnoohhld!), it might be appropriate.

I do have to say that the ABC Body Building forum is home to one of the best discussion-thread titles I’ve ever seen. Posted by Mr. Uber-Christian himself in the “Sanctuary“:

And you get to read it with Steroid Needle Buddy-Christ winking at you.

UPDATE — And by the end of the day he caught on to it and replaced the new Buddy Christ image with something else. It was still up there for a full day. Not bad.

Pure Whey Jesus

So much for having any more fun with that guy. Until next time…

Ego-crawling: How Popular is Your Name?

I’ve got to give a lot of credit to one of the search marketing brains at work for a really interesting idea. We’ve recently rolled out some pages listing People Name Popularity. It’s currently limited to a very few names as an initial test, but the ultimate result would be a large directory of names, ranked by popularity (based on searches on our site). Interested in how popular the first name James is? How about the last name Smith? And there’s plenty of food for the ego surfers too. When James Otepka decides to Google himself, he gets some interesting (and hopefully ego-boosting) info from our directory. Why bother with all of this? Why not? We have the data and search history, so let’s make use of it. The extra traffic from Google might not hurt either. It’s worked for some less reputable folk out there.

The Names Database took a very different tack and called themselves a Reunion/Classmates-type connection site. Just give them your name and an e-mail address… and then another 5 names and e-mail addresses… and then a monthly fee… and then maybe you can find someone. Meanwhile they built out a massive static “directory” of their (your) names. If/when you actually get to a page for a particular name, it’s just a plain, unusable list of as many or as few names as possible. Oh, and a whole lot of irrelevant Google AdSense ads. But… it all worked for them. They show up on Google results pages for plenty of uncommon names. And all of those e-mail addresses (valid or not) that they collected garnered enough attention to fetch a $10 million price tag.

The Uber Google Bombs

Google bombing has been a fun pastime for troublemakers and pranksters trying to get certain search words or phrases to return humorous and ironic results on Google. I’ve been involved with one wildly successful campaign, and have gotten some good laughs at others. I started wondering what some of the most highly-linked phrases on the web must be.

Celebrity names, news items, and events are all too obvious… How about “click here“? 5 billion results on Google. Top results include a currency calculator for some reason, and then the expected ones: Adobe Acrobat Reader download, Netscape, QuickTime, and Macromedia Flash. Think about all of those links around the web where somebody points to one of those utility downloads with the phrase “click here”. There are a few perfect-10 Google PageRanks in there. (It’s rather odd that eBay is the only sponsored link, and it takes you straight to an eBay search for “click here”.)

What others might be up there? Plain ol’ “here” returns 8 billion results, and RealPlayer tops this list. Other big ones are “get it” and “get it now”. And with 14 billion results, “home” is a clear winner in this non-comprehensive roundup. A huge percentage of sites on the internet have tons of internal “home” links. The site that beats them all out:  Incredibly deep content, good internal linking, good external linking to boost your rank even more, and a perfect 10. Easy lessons to learn. I’m sure that .gov domain doesn’t hurt much either.

Idea: Bookmark Reminders

When I find something particularly interesting while surfing the web, I often save it to a bookmarking service, or bookmark it in my browser. The problem is that I’m really bad about referring back to these two places. If I’ve just bookmarked a new weblog or news site that will continue to have interesting content in the future, I rarely find myself returning. It’d be nice if either one, whether built into the browser’s bookmarks or as a feature of the service, had an option to be reminded about a bookmark. Maybe it sends you a quick e-mail a month from now, mentioning the site again. Or your web browser could just open a tab with the site on that day in the future.
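The feature is simple enough to sketch. A toy version of the idea, with made-up names and URLs; a real one would live inside the browser or the bookmarking service:

```python
# Toy bookmark-reminder sketch: store each bookmark with a "remind me on"
# date, and check which ones are due to be resurfaced.
from datetime import date, timedelta

reminders = []  # list of (url, remind_on) pairs

def remember(url, days_from_now=30):
    """Save a bookmark to be resurfaced after `days_from_now` days."""
    reminders.append((url, date.today() + timedelta(days=days_from_now)))

def due_today():
    """Return bookmarks whose reminder date has arrived."""
    today = date.today()
    return [url for url, remind_on in reminders if remind_on <= today]

remember("http://example.com/interesting-weblog", days_from_now=0)
remember("http://example.com/check-later", days_from_now=30)
print(due_today())  # only the first bookmark is due
```

From there the “resurface” step could be an e-mail, or just opening a tab, as described above.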

It’s almost like an intermediate step before adding the site to a feed reader or a start page. I don’t have that much invested in the site yet, I just want to remember to come back and take a look in the future and reevaluate. If it’s still developing, I could reset the reminder. Or if I like it, I can add it to my regular browsing routine or feed reader.

Or maybe I just need to work on some more structured browsing/bookmarking habits and organize all these scattered bits. Nah, that’ll never work.

Content Streams

I’ve decided to ditch the photos from the main blog posts for a number of reasons. 1) I found it was affecting my photo posting habits. I felt less inclined to upload a large batch of photos (which I often like to do), because I knew it’d make for an odd flurry of photo-only blog posts all at once. 2) Flickr‘s blog posting is anything but automatic. For every photo I posted to Flickr, I then had to click the “blog this” button, confirm a couple of times, and then log into WordPress to correct the category and style quirks that Flickr wouldn’t let me customize. 3) Reader complaint. Williamsburger mentioned that he’s already my Flickr contact, so he was seeing all of my photos twice.

I do think I like the link auto-posting though, and I think I’ll keep it. For one, it’s actually automatic, and it lets me specify a particular category. I like the bulleted list it spits out (mmmm… lists), and the added pressure it puts on me for more regular actual content.

I have to say, the idea of a single continuous stream of all the disparate info we collect and post on the web is really appealing, but it isn’t perfect yet. The new holy grail is a single-column, single-feed, single stream of content from a person, rather than jumping from site to site (or column to column) to pick up each of their mini-streams. Then again, I still don’t even use an RSS or feed reader myself to track my regular websites. I guess there’s something to be said for that scattered but still organized separation of information.

The archeological and biological term in situ just popped into my head and seemed somewhat relevant. On the internet, is there still value in seeing information in its original setting? Or are the type of information and content, not to mention the medium, different enough to adapt to repackaging, feeding, and other metamorphoses? (metamorphosi? metamorphosises?)

If we’re all just feed-reading, why do we still have web pages at all? What do you think of single vs. disparate content streams?