Using the new Google Chart API. Pretty data for the masses. So freaking cool.
The discussion that follows in the comments is some insight into the various sides of the “semantic web” debate, and the challenges that come with organizing so much data. The holy grail is not just categorizing or labeling all of the information, but knowing the relationships between it all.
Will users find enough value in the Freebase system to want to actively contribute to it? Why not start with Wikipedia as a framework and add the relationship layer to the existing site? And what about Facebook’s existing social/interpersonal relationship layer, with other layers built on top of it (the Parakey acquisition could be a step in this direction)?
This is the weakness in having individual sites try to be “the answer” to such inter-related problems. Until all of these pieces are decentralized and opened, I’m afraid we’re still stuck with a bunch of walled information gardens.
Unfortunately, the end goal of many of these efforts, the idea of the mythical “semantic web”, doesn’t exactly have a place for single-resource destination sites like Facebook, Wikipedia, or Freebase. Given that none of them want to relinquish control any time soon, we should continue to see these power struggles for a long time to come.
So who is in a good position to bring us toward a more semantic web? Out of the big guys, I think Facebook is in one of the best places. As far-fetched as it may be, if they were to open up a true Facebook API, opening their social network for use by outside services (not forcing people to play around inside), they could leverage their huge user base and be the social network provider that’s plugged into every new service out there. Somebody will have to do it. People are dying for the “web 2.0 address book.”
Lastly, I think web browsers are in perhaps the best position to take advantage of these evolutions. It may be no coincidence that the Firefox creators who started Parakey have now been snatched up by Facebook. Look at what Greasemonkey and Firefox plugins have done to the way people view web pages. Look at what widgets, gadgets, feed readers, and the iPhone/Safari “platform” are doing to the way you consume/search/browse information from different sites and through different devices.
The semantic web might never arrive, but a semantic web may already be here.
Last Tuesday (wow, the week has flown by), I attended the second Ignite Seattle event at CHAC. Much like last time, the event was packed and full of geek energy. I got there towards the end of the egg-launching contest and didn’t get to see much because of the crowds, but I was able to see a little flying yolk.
And then we got to the talks…
Last night I attended the first Ignite Seattle event, hosted by Make magazine and O’Reilly Radar. I took quite a few photos of the bridge-building contest and there are plenty more photos of the bridges and presentations in the Ignite Flickr pool. There was plenty of hot glue flying and some impressive bridges for a 30-minute time limit.
After the bridge contest and a short break, the Ask Later talks began, with the whirlwind format of 5-minute presentations: 20 slides, 15 seconds per slide (not under the speaker’s control). There were some really well-organized talks for having to fit into just 5 minutes, and a wide range of topics. This roundup covers some of the highlights better than I can. RealityAllStarz got a good laugh, and many people seemed impressed by the Dorkbot presentation on technological art projects, which got some oohs and aahs from the crowd.
Scott Berkun gave an all-too-brief teaser of his upcoming book on the myths of innovation, pointing out a couple of common misconceptions about famous innovators and “eureka” moments of discovery. I’m anxious to read more. Bre Pettis from Make magazine gave a really funny, disjointed presentation on all sorts of random things he’s made, including a bat-detecting watch. Damn cool. Buster McLeod from The Robot Co-op and 43 Things also gave an inspiring talk on the currency of motivation, and how motivating yourself (through others) can help inspire you to tackle larger and larger challenges. He also mentioned his new venture, the McLeod Residence, an art and technology gallery/bar in downtown Seattle, which sounds interesting. His hand-drawn slides were also great.
By far, the oddest, most confusing presentation was by Kathleen Dollard from GenDotNet. I still don’t know what exactly she was pitching, or whether it was coming from Microsoft or not. It was something (software? service? tool?) called “Workflow” which is designed to help engineers interact with their managers and coworkers better. It was literally a flowchart of “What do I do next?” for people who have zero interpersonal skills whatsoever. Say you e-mail the boss with a question and a) he doesn’t respond, b) he responds this way, c) he responds that way… here’s what you do next. I couldn’t help thinking that the whole thing was a joke, but it really wasn’t. Somebody next to me muttered, “It’s like Office Space the flowchart.” I’m sorry, but if you have individuals in your organization who can’t interact with each other, or with their managers, the answer isn’t to give them a flowchart of how to work. I might suggest you instead look at finding some better managers or engineers who can work with each other. I could see a suite of development process flows being helpful to some organizations, but this example seemed like a little too much micromanagement.
Overall I thought the event was pretty interesting, especially considering I’ve missed the past Seattle Mindcamps. The CHAC Lower Level was a decent venue, although the setup of the main room and the single entrance caused a bottleneck. There was plenty of space in the room for people to stand and sit, but tables blocked people’s way. Also, having a loud DJ start in the bar area when people are still giving presentations was a bit obnoxious.
The presentations themselves were often more on the product/website/group promotion side. I would have liked more of the 5-minute presentations devoted to a single drilled-down topic, or more practical coverage of some subjects rather than the common, “Here’s the business/site I started, isn’t it cool?” Some of the presentations that seemed to work best were the editorializing on a specific area (motivation, innovation, startup funding…) rather than the tip-of-the-iceberg presentations of a really big topic (although it was fun seeing people jam those into 20 slides and 5 minutes).
I’m sure there will be plenty of refining for the next event, and I’m looking forward to seeing what comes of it. A big thanks to everyone who helped make it happen. I’ll see you next time.
The latest fury over the Facebook redesign and new features has quickly grown to ridiculous proportions. To catch up on the latest happenings, here’s the Techcrunch summary on the new features. And the follow-up about the outrage, with some clear explanations of the ridiculousness. Another good summary of the whole ordeal is available here. The Facebook CEO explained it just right when he wrote:
“The privacy rules haven’t been changed. None of your information is visible to anyone who couldn’t see it before the changes. Nothing you do is being broadcast; rather, it is being shared with people who care about what you do–your friends.”
I think most of the backlash is a result of users just not understanding how or why the feature works the way it does. Friendster did this exact same thing via e-mail updates and “what’s new in your network” boxes on the site. There was little outcry there (probably because no one remembers/uses Friendster anymore), and Friendster even took it a step further and added the reverse-stalker feature a year ago. You can see a list of everyone who viewed your profile and when, whether they are in your immediate network or not. It was released, and turned on by default.
What really strikes me about the Facebook situation is the illusion of privacy that all these angry users are clinging to. You’re posting your semi-private information, to a semi-public site, where anyone in your semi-private network can view those details. Now your semi-private network can see those details and changes THAT YOU’RE PUBLISHING on the very same site, viewed in a slightly different way. Where did the confusion come from? How did users misunderstand the entire concept of the news feed? Did they just miss how easily-controllable all the details are? Are the ideas and technology (social networks, plus quick-reference news feed) just too new for them to wrap their heads around?
While browsing through all the anti-news feed groups that have sprung up in Facebook, I came across this gem of irony. A group was titled: Is it bad that I found the “against the news feed” group from the news feed?
Like some people with personal websites or weblogs, I have my resume sitting on my site, just hanging out. Every once in a long while I’ll get a recruiter who stumbles across it as they’re doing searches. I received this in an e-mail the other day:
I am a recruiter looking for some top talent to fill an exciting Software QA Tester position at Google.
There is a Google office that opened up not too long ago in Kirkland, Washington and apparently they’re looking to fill a number of QA positions (temporary assignments). I do have a bit of experience, but unfortunately I’m not too interested in moving back into QA, let alone on a temporary basis. The rest of the message was your typical technical job description and common sense requirements: “a quick learner, a great team player, and able to work independently…” But at the top of the requirements there was an intriguing line, surrounded by double asterisks:
** Some of the openings require extensive experience testing Flash applications and some game background. **
What is Google working on? Flash and games? Not a lot of Google’s products currently use Flash. There’s Analytics, which was almost entirely acquired rather than built in-house. Google Video uses Flash to embed its videos, and Google Finance does some nifty Flash stock charts. Is there much else?
And games? What’s the plan there? Sure, it falls somewhere on the list of possible directions that Google could go. Yahoo has a huge user base in their online games, and Google may want a piece of the pie. It would of course be a huge area for targeted advertising through the AdSense behemoth. Puzzle games, maybe an umpteenth Bejeweled clone, Google-tris… A Flash-based Google MMORPG? Although I think they’d make a real killing if they went with online card games and poker, using real money powered by Google Checkout.
Anyway, if you’re interested in a temporary QA position in the Seattle area, the big “G” is up to something.
I’ve now been using Bloglines almost exclusively for over a week, for my weblog checking and reading. For ages now I’ve been using this static bookmark page as my homepage, and I’ve still been hitting the various sites directly to check for updates and read new posts. My original plan for the page (as the name suggests) was to pull feeds for the sites and have it be a single-stop, quick-glance page where I could see what’s new. Continual problems with feeds and scripts gave me more errors than I wanted, and sent me off on trouble-shooting goose chases. I eventually stripped the page down to a link page and quite enjoyed visiting each site on its own. The setup still wasn’t ideal, and I found myself missing occasional updates.
After a number of recommendations, I decided to give Bloglines a try. At first I was half-assed about it and would only occasionally check things through Bloglines. To give it a real effort, I converted my default homepages over to the Bloglines page, and decided to use it exclusively for a week. It’s been pretty nice. I like the simple interface: browse through your sites in the left pane and read updates on the right. I never got used to viewing updates for a whole group at a time (multiple sites’ feeds listed all together), so I still viewed updates site by site. With everything in the same window, and the number of new updates listed next to each site link, I found it incredibly quick to browse and catch up on the latest happenings.
As I’ve written before, I think there’s something to be said for visiting the actual site, rather than viewing someone’s content through a third party window. I’ve been stubborn about it, and it was with some reluctance I finally tried Bloglines. During the week, I still found myself visiting some of the original sites just in case I missed something. It just didn’t feel right when I wasn’t reading it on the original site. Then there are some sites out there that don’t give you full access to feeds, so I still had to visit on my own. Or there are the sites that break out their link and post feeds separately, so I was checking two feeds in Bloglines for a single site.
I’m not sure whether I like the feed reader experience better than my old-fashioned habits, but after a week I did get used to things. It certainly is easier to add and organize sites in the Bloglines list, rather than a lousy hand-edited HTML file. I might continue this way for a while. Does anyone prefer a site other than Bloglines? The Safari RSS option was also mentioned, but I’m on a PC much of the day. Or maybe something like Sage for Firefox could do the trick. Firefox live bookmarks are neat, but don’t provide a good at-a-glance overview. What are your preferred methods for reading the web?
In my Techcrunch party write-up the other day, I pondered a bit about the profitability of the various startups around. I’ve chatted a bit more with some friends about Redfin in particular, and how well their model of selling houses online is going to fare. I ran across this blog post, actually written just before last week’s party, which dissects some of the numbers quoted in this Seattle PI article about Redfin’s sales to-date. Whichever numbers are correct; 40 homes at $18 million ($180k commission), or 13 homes at $7 million ($70k commission), I think they’re fairly impressive for having their direct service running for just 5 months (and still at just 25 employees).
The PI article mentions that ZipRealty sold $900 million worth of homes in the first three months of the year (with a shocking 1,400 agents!). Apply Redfin’s “measly” 1% commission to that and we’re talking $9 million in income. Yeah, yeah, so what do all these numbers mean? I’m no economics genius, but it seems clear that the online home buying business scales nicely. Despite having to hire a number of agents to man phones, process paperwork, manage offers, etc., the throughput of a polished web-based real estate system is always going to be faster (not to mention cheaper for the buyer) than going through it the old-fashioned way. Also, considering only a fraction (maybe half?) of Redfin’s employees are currently agents, they’re more efficient (by either sales figure) than ZipRealty, at least for the time being.
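For what it’s worth, the back-of-the-envelope math here is easy to sanity-check. This is just a quick sketch of the commission arithmetic using the figures quoted in the articles (nothing here is audited, and the per-agent number is my own rough division):

```python
# Sanity-checking the commission figures quoted above.
# All dollar amounts come from the blog post / PI article; illustrative only.

COMMISSION_RATE = 0.01  # Redfin's 1% commission

def commission(sales_volume, rate=COMMISSION_RATE):
    """Commission income on a given dollar volume of home sales."""
    return sales_volume * rate

# Redfin's two conflicting sales figures: $18M (40 homes) or $7M (13 homes).
print(commission(18_000_000))   # the ~$180k figure
print(commission(7_000_000))    # the ~$70k figure

# ZipRealty: $900M of homes in one quarter, with 1,400 agents.
print(commission(900_000_000))  # the hypothetical $9M at a 1% rate
print(900_000_000 / 1_400)      # roughly $643k of sales volume per agent
```

Either way you slice it, the fixed costs of the web platform get spread over every additional sale, which is the whole scaling argument.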
Redfin is in its infancy, and the 360Digest post mentions that a small $70k commission sum might not be worth an $8 million investment round. I would argue that the ZipRealty example demonstrates that the idea scales very nicely, and can easily make that $9 million back with the right throughput of sales. I think Redfin is in good shape, and I’m really rooting for them as they’ve taken on the San Francisco market. Expanding means hiring more of their pseudo-agents to handle the sales, but the more they can streamline their core application, the more sales they can push through, and so on…
Last night, my friend Darren and I attended the TechCrunch Seattle Web 2.0 Party. The event was sponsored by local startups: Redfin, Farecast, and TripHub. It was the expected shmooze-fest of young, un-profitable new businesses, established big-guns chatting and hinting at their grand plans, and plenty of regular folk all wanting a piece of the pie. Here’s the summary of the few demos we saw and various conversations we had…
It’s hard to browse Flickr much now without running into a lot of these eerily-lit, surreal, colorful photos. These photos tagged with “HDR” are quite common in Flickr’s daily “interestingness”, and the HDR group pool is seeing a lot of activity. Here are a couple shots I took trying out this new technique:
So what the heck is HDR? It stands for High Dynamic Range, and the Wikipedia entry on HDR imaging does a good job of explaining it. Now, the above images are not actually HDR images (as Andy corrected me early-on), they’re tone-mapped images generated from an HDR image. Seems like semantics, but it’s sort of an important distinction that’s been completely lost during this trend.
Using software such as Photoshop CS2 or Photomatix, you load multiple exposures of a scene, including full exposure data (you’ll need to shoot in RAW), and the software combines them into a single HDR image. The image contains the varying exposure possibilities for highlights and shadows, using the starting images you gave it. You can think of it as giving you control over the actual light in different areas of the scene. The tone-mapping process is essentially a way of taking all that HDR information and generating an image that shows all the best-exposed parts. It brings out details in shadows, tones down blown-out highlights, etc.
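Conceptually the pipeline is simpler than the tools make it look. Here’s a toy sketch (my own simplification, not Photomatix’s or Photoshop’s actual algorithm) of the two steps: estimating a radiance map from a bracketed set of exposures, then squashing it back into a displayable range with a simple Reinhard-style curve:

```python
def merge_to_hdr(exposures, times):
    """Merge several exposures of the same scene into a radiance estimate.

    exposures: one list of pixel values in [0, 1] per shot.
    times: the shutter time used for each shot.
    Each pixel's radiance is a weighted average of value/time, weighting
    mid-tones most heavily (pure blacks and blown whites carry no info).
    """
    hdr = []
    for pixel_stack in zip(*exposures):
        num = den = 0.0
        for value, t in zip(pixel_stack, times):
            weight = 1.0 - abs(value - 0.5) * 2.0  # 1 at mid-gray, 0 at extremes
            num += weight * (value / t)
            den += weight
        hdr.append(num / den if den > 0 else 0.0)
    return hdr

def tone_map(radiance):
    """Global Reinhard-style operator: compress any radiance into [0, 1)."""
    return [r / (1.0 + r) for r in radiance]

# Fake a bracketed sequence from a known scene, then recover it.
scene = [0.05, 0.2, 0.5, 1.5]                  # "true" radiance values
times = [0.25, 1.0, 4.0]                       # under, normal, over exposure
shots = [[min(1.0, s * t) for s in scene] for t in times]
recovered = merge_to_hdr(shots, times)         # close to the original scene
displayable = tone_map(recovered)              # everything back in [0, 1)
```

Real tools also have to recover the camera’s nonlinear response curve (the Debevec–Malik method is the classic approach), which is part of why shooting RAW with full exposure data matters; the sketch above assumes a perfectly linear sensor.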
So what’s the point of all of this? Well, the HDR techniques and algorithms have a lot of applications in computer graphics, effects and video games, where natural light is one of the toughest things to simulate. In photography though, this tone-mapping is just a processing step, not unlike a Photoshop filter, which makes for a pretty image. Like shooting in infrared, or macro, or lomo, it’s another tool (some would argue a gimmick) which creates a unique photographic look.
If you’re looking to play around with it, the Flickr HDR group has a lot of tips and links to resources. Looking for interesting things to shoot? Scenes with high light/dark contrast work well since you’ll be under/over exposing your shot to pick up the details in both extremes. Skies and clouds end up looking pretty neat, and so do reflections.
Craigslist was recently the target of this bitter little editorial in the San Francisco Bay Guardian. It received some smart(er) responses, including this one analyzing the UI of Craigslist and in particular, this rant by Anil Dash. Dash sheds some light on the decision-making of alternative weekly newspapers and in my experience working for one, I saw much of the same.
Update #2 — I’m also trying out this method for automatically posting daily del.icio.us links as blog posts. I’m setting up Flickr’s auto-blogging too, and if all works out OK, I might trim down all these side columns into one main stream of posts.
Update — It figures that WordPress 2.0.1 is released today and I have to upgrade again. At least it fixes something else that I’d been trying to troubleshoot.
I mentioned I upgraded last weekend, and just the other day I finally got around to fixing the commenting again. I’m not sure what killed it in the first place. After I gutted things, inserted new template code, and brought in a new comment template, it all worked again. It was probably a little mistake I made somewhere that broke it all, or some deprecated old syntax that I was still using. Anyway, it’s all back. And WordPress 2.0 itself is…
Recently the MIT Media Lab initiative, headed by Nicholas Negroponte, to develop a $100 laptop for distribution to schools and children in developing nations has been getting a lot of press. See articles at BBC News, the Wall Street Journal, and a Wired interview with Negroponte.
This $100 laptop initiative, and similar projects before it, are based on the large assumption that there is a “digital divide” and that this divide needs to be closed by the UN and those of us in first-world nations.
I’ve had a chance to see and play a bit with the 360 that Andy brought home the other day, and here are a few of my initial thoughts…
- The dashboard (OS) is great and gives you just about everything you could want out of a media center. The interface is a little less than intuitive at times, and the “friend is online” alerts appear over everything. Very annoying, but there’s probably an option to change that somewhere.
- Backwards compatibility is a little rocky. Some texture artifacting here and there, and really bad mic sound in Halo 2. But connectivity to other players on regular Xbox Live worked just the same.
- Games look great. Crisp, high-res, high polygon counts, yadda, yadda… But in Project Gotham Racing 3, the developers took too much advantage of the higher resolution and made some of the menu and UI text so small it’s almost impossible to read on a normal TV. Wake up: not everyone’s going to be playing in HD.
- Gameplay? Well, that all depends on the game, so… same as any other console. Aside from more glitz, glamour, bells and whistles, the launch titles are more of the same.
- Little things go a long way… Wireless controllers out of the box, power buttons on the controller, and standard USB ports to plug any device into the 360.
- It’s loud. When the fan comes on, it sounds like it’s ready for lift-off.
Fun to have one around, but I don’t think I need my own just yet.